I’ve been talking about Judges a lot recently. My father-in-law, a prominent SC/Silk in South Africa and Acting Judge, passed away a few weeks ago, and it’s been an incredibly sombre time for our family, specifically my wife, her siblings, and my mother-in-law. I’ve been regularly reminded of what a calm-headed, fair and empathetic man he was, and how being able to understand people, deep within them, is what makes a Judge great.
As someone who has spent over two decades navigating the complexities of human capital across developing markets, I've witnessed firsthand how technology disrupts traditional roles while creating new opportunities.
The conversation around artificial intelligence replacing judges strikes me as both fascinating and fundamentally misguided.
Let me share why I believe we're asking the wrong question entirely.
When I moved to Hong Kong in 2009, in the middle of the global financial crisis, to start The GRM Group with no prior knowledge of the Asian legal market, I learned a crucial lesson: successful transformation comes from understanding what truly matters to people, not from trying to eliminate the human element entirely.
The same principle applies to our judicial system.
The notion that AI will "replace courts" reveals a fundamental misunderstanding of what justice actually requires. In my experience working across 60+ cities and four continents, I've learned that the most critical decisions—whether in hiring talent or delivering justice—require something that no algorithm can provide: authentic human judgment combined with accountability.
Current AI deployments in courts are doing exactly what good technology should do—they're augmenting human capability, not replacing it.
From transcribing hearings to drafting initial opinions, these tools free judges to focus on what matters most: weighing equity, considering societal values, and making decisions that reflect our collective moral compass.
This mirrors what we've seen in legal recruitment, where technology has enhanced our ability to match talent with opportunities but hasn't eliminated the need for human insight into character, cultural fit, and potential.
Based on my observations across global markets, I see AI integration in the judiciary following three distinct tiers:
Assistive AI represents the current reality—speech-to-text, translation services, and administrative efficiency tools. These innovations, like the "lights-out" filing systems in Palm Beach or Hangzhou's Xiao Zhi assistant, streamline operations while keeping human judges firmly in control.
Decision-support AI is where things get interesting. Tools like COMPAS risk assessment scores and China's Suzhou Court deviation analysis provide judges with data-driven insights to inform their decisions.
This reminds me of how we use market intelligence and candidate analytics at GRM—the data informs our recommendations, but the final placement decision always involves human judgment about factors that can't be quantified.
Autonomous AI remains largely experimental, and for good reason. Estonia's stalled "robot judge" pilot illustrates why fully automated judicial decisions raise fundamental concerns about legitimacy and due process.
Even China, despite its aggressive tech adoption, explicitly states in its 2025 roadmap that only humans may sign judgments.
Throughout my career, I've learned that trust and authenticity are the foundation of any successful relationship—whether it's between a recruiter and client or between the justice system and society. This is why judges will remain central to our legal system:
Democratic Legitimacy: Courts derive their authority from transparent reasoning and public accountability. Black-box AI models simply cannot satisfy the fundamental duty to give comprehensible reasons for decisions that affect people's lives. This is similar to how our clients at GRM expect clear explanations for our recommendations—trust requires transparency.
Error Management and Equity: AI systems propagate biases hidden in their training data, as we've seen with controversial recidivism assessment tools. Judges provide the essential human backstop, able to recognize when algorithmic recommendations don't account for unique circumstances or emerging social values.
Constitutional Balance: An independent judiciary serves as a crucial check on both executive power and algorithmic overreach. Delegating final authority to software would concentrate power in opaque code bases, threatening the very foundation of our democratic system.
Moral and Cultural Interpretation: New dilemmas—from genetic privacy to environmental justice—require the kind of ethical reasoning that goes beyond statistical inference. These decisions demand the wisdom that comes from lived experience and moral reflection.
Drawing from my experience in transforming recruitment practices across emerging markets, I see the judicial role evolving rather than disappearing.
Future judges will need to become:
Algorithmic Auditors: Just as modern recruiters must understand applicant tracking systems and AI matching tools, judges will need technological literacy to interrogate model outputs and identify hidden biases.
Hybrid Panel Leaders: Complex disputes may require mixed benches combining human judges with certified AI systems generating parallel analyses—similar to how we now use multiple data sources and human insight to make placement decisions.
Procedural Justice Guardians: Ensuring that litigants can challenge algorithmic evidence and that decisions remain explainable will become a primary judicial responsibility.
Based on my experience navigating technological transformation in recruitment, I believe several policy imperatives are essential:
First, we must mandate meaningful human control, codifying that AI output remains advisory unless affirmed by a judge. Second, we need robust certification and audit regimes for judicial AI systems, ensuring fairness, transparency, and security. Third, explainability must be non-negotiable—parties deserve comprehensible rationales and the ability to contest AI-informed decisions.
Most importantly, we must invest heavily in judicial AI training. Without technical competence, judges risk either blind reliance or undue skepticism—both harmful to justice.
After twenty years of helping legal professionals navigate career transitions across developing markets, I've learned that the most successful transformations honour what makes us fundamentally human while embracing tools that enhance our capabilities.
The future of justice isn't about choosing between humans and machines—it's about creating a partnership where algorithms process data while judges render judgment.
The court of tomorrow will be cyborg, not robotic. AI will streamline operations and may handle routine, low-stakes matters, but the human judge—as arbiter of values, interpreter of law, and guardian of legitimacy—remains irreplaceable.
Abdicating human oversight wouldn't just erode public trust; it would weaken the rule of law itself. Our challenge is crafting governance that ensures machines serve justice rather than supplant those sworn to uphold it.
Having witnessed countless transformations across global markets, I'm confident that those who embrace this hybrid future thoughtfully will emerge stronger, more effective, and more trusted than ever before.
Rob Green is the Founder and CEO of The GRM Group, a multi-award-winning legal executive search and strategic management consultancy. With over 20 years of experience across 60+ cities and four continents, he brings unique insights into how technology transforms professional services while preserving essential human elements.
RIP Boet, thanks for everything.