Introduction
AI systems are increasingly being deployed in the judiciary to assist with administrative, analytical and even adjudicative tasks. These developments are often justified by the promise of greater efficiency, consistency and access to justice. Using the recent introduction of automatic transcription services in Slovenian courts as a starting point, this blogpost critically examines the assumptions underpinning such efficiency-driven narratives. I argue that persistent technical and legal challenges, such as accuracy limitations, complex oversight requirements, the gradual erosion of human (judicial) expertise and a growing number of appeals to human judges, can result in the efficiency paradox of judicial AI, where technologies intended to accelerate proceedings ultimately slow justice down. I therefore call for caution in adopting AI within the judiciary amid enthusiastic claims about its benefits.
-
The Slovenian Case: Automatic Transcription Service in Courts
In October 2025, it was reported that Slovenian courts had successfully introduced an automatic transcription and live captioning system, following earlier experiments with such technologies and in line with broader efforts to digitalise the judiciary in pursuit of greater efficiency, accessibility and transparency. After a successful pilot project at the Ljubljana District Court, the system was implemented throughout the country. Automatic transcription tools convert audio (and in some cases video) content into written text, while live captioning systems generate real-time textual representations of spoken content. These speech-to-text applications rely on natural language processing (NLP) and machine learning models trained on large corpora of audio and textual data. Their typical workflow in courts involves audio capture, model inference, the generation of a draft transcript and (if necessary) human correction.
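To make this workflow concrete, below is a minimal sketch of such a pipeline in Python. It uses the open-source Whisper model purely as an illustrative stand-in; the actual system deployed in Slovenian courts has not been publicly documented, and the audio file name is hypothetical.

```python
# Minimal sketch of the court transcription workflow described above:
# audio capture -> model inference -> draft transcript -> human correction.
# Whisper is an illustrative stand-in, NOT the system actually deployed
# in Slovenian courts (which is not public).
import whisper  # pip install openai-whisper

def draft_transcript(audio_path: str) -> str:
    """Run model inference on a captured audio file and return a draft
    transcript, which a clerk would then review and correct."""
    model = whisper.load_model("base")     # small general-purpose model
    result = model.transcribe(audio_path)  # speech-to-text inference
    return result["text"]                  # draft text, pre-correction

if __name__ == "__main__":
    # "hearing_recording.mp3" is a hypothetical file name.
    print(draft_transcript("hearing_recording.mp3"))
```

Note that the final step of the workflow, human correction, has no code analogue: as I argue below, it is precisely this step that complicates the efficiency narrative.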
Such systems are frequently promoted for their speed and efficiency, and their consequent ability to reduce the burden on court personnel. Instead of judges or clerks having to take notes manually, which may be incomplete or prone to error, or, in common law jurisdictions, court reporters having to produce verbatim transcriptions of court hearings, an automated tool can perform much of this work; it can also help in understanding the testimony of softly spoken or nervous witnesses. Beyond institutional benefits, automatic transcription and live captioning systems support the social inclusion of persons with disabilities, especially those with hearing loss. Their adoption is in line with Article 13 of the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD) and the European Victims’ Rights Directive (2012/29/EU), both of which require that persons with disabilities receive effective and equal access to justice, including access to information, interpretation and translation.
Against this backdrop, courts in different countries are increasingly exploring the integration of recording, transcription and translation tools into judicial processes.
-
Broader Digitalisation Trends in the Judiciary and Its Quest for Efficiency, Consistency and Access to Justice
The turn to speech-to-text technologies is embedded within a broader European and global trend toward the digitalisation of judicial processes. Beyond automatic transcription and live captioning, courts are experimenting with other NLP applications, such as e-discovery tools that sift through large volumes of legal documents to identify relevant facts or legal sources, automated summarisation systems, tools for the anonymisation and pseudonymisation of court documents, and AI-enhanced legal research tools. More recently, the deployment of AI has been considered for more substantive judicial support functions. Think, for instance, of risk assessments (for bail or recidivism), sentencing support, predictive legal analytics, and even the preliminary drafting of judgements. In this sense, AI systems constitute a new layer of infrastructural mediation between judges and the public.
The primary rationale driving this digitalisation agenda is, unsurprisingly, efficiency. Courts face substantial caseloads and persistent backlogs. Digital tools are therefore seen as mechanisms for accelerating the processing of large amounts of legal data, allowing judicial actors to focus on tasks deemed to require human judgement. A second frequently cited rationale concerns the promotion of consistency and objectivity in judicial outcomes. Given that judicial decision-making can be affected by extra-legal factors, such as a judge’s mood, hunger or the outcome of a football match, some scholars have argued that AI tools could reduce unwarranted variation, such as sentencing disparity, and thereby enhance the principle of equality before the law. As noted above, digital tools are also heralded for their potential to advance substantive rights, including access to justice, by providing vulnerable people with quicker and more user-friendly access to legal information.
Remarkably, however, these rationales remain assumed rather than empirically demonstrated. Although the goals are appealing, such a one-sided approach, focusing solely on the positive effects, overlooks the risks and challenges associated with the use of AI in the judiciary. Scholars have raised concerns regarding biased AI outcomes, the opacity of AI systems and their lack of transparency, and the unclear attribution of accountability, as well as specific implications for the right to a fair trial, such as judicial independence and impartiality and the judicial duty to state reasons. In this blogpost, I aim to show how these technical and legal challenges may ultimately contribute to what I call the efficiency paradox of judicial AI.
-
The Efficiency Paradox, or How AI Slows Down Justice
In what follows, I argue that while the deployment of AI systems in the judiciary may seem to generate efficiency opportunities, it also introduces a series of challenges, many of which have already been documented. Taken together, these challenges raise a critical question: might the use of AI in courts ultimately result in an efficiency paradox, in which technologies intended to accelerate judicial work instead slow it down?
A first source of paradox arises from validation and oversight costs that may outweigh the anticipated efficiency gains. The accuracy of AI systems is heavily dependent on the availability of large, diverse and representative training datasets. For automatic transcription tools, this requires extensive audio and text data covering a range of acoustic conditions and linguistic variations, including dialects, accents and sociolects. If not, performance can continually degrade as new accents, domain-specific jargon and changing linguistic usage appear. For applications such as case law summarisation, case law retrieval systems or AI-assisted judgement drafting, the underlying models similarly rely on sufficiently rich and representative corpora of judicial decisions from which to infer relevant patterns. Yet such datasets are often unavailable. For instance, in Belgium, only 1% of case law is published (an issue that extends beyond the scope of this blogpost). Where training data is lacking or unrepresentative, the consequences can ultimately undermine efficiency. In transcription, for instance, certain accents or minority languages may be misrecognised, leading to inaccuracies in the record, the need for manual corrections and thus delays. Similar concerns arise for automatic translation tools that are not up to the task and have been shown to misinterpret legal terminology. Generative AI, too, comes with the risk of amplifying biases in its outputs.

To prevent such issues, AI models require continuous monitoring and validation to ensure accuracy. This ongoing oversight, however, comes with significant resource demands. Moreover, many legal and ethical guidelines require that judges remain ultimately responsible for the outputs of AI systems (see, for instance, recital 61 of the European AI Act or the UNESCO Guidelines). This responsibility creates an additional layer of review. When generative AI is used to assist in drafting judgements, judges must (ideally) verify every factual assertion (given the risk of inaccuracies or manipulation), scrutinise all legal references and independently assess the reasoning steps, which often remain opaque. In complex or sensitive cases, this review process may take longer than drafting the decision manually. As a result, any promised efficiency gains may be neutralised by the need for intensive quality control, periodic retraining and auditing of models, and correction of their outputs.
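To give a concrete sense of what such monitoring involves: a court (or its vendor) would need to track accuracy metrics, such as the word error rate (WER) of draft transcripts against human-corrected reference transcripts. The sketch below is a minimal illustration; the 5% threshold and the routing rule are assumptions for the sake of the example, not a documented practice.

```python
# Minimal sketch of ongoing transcription accuracy monitoring: word error
# rate (WER) between a human-corrected reference and the raw AI draft.
# The 0.05 threshold below is an assumed tolerance, not a real standard.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Flag drafts whose error rate exceeds the (assumed) tolerance, routing
# them back for full manual review rather than light correction.
if word_error_rate("the witness arrived at nine",
                   "the witness arrived at wine") > 0.05:
    print("Draft exceeds WER threshold; full manual review required.")
```

Even this simple check presupposes that someone produces the corrected reference transcript in the first place, which is precisely the kind of ongoing human effort that eats into the promised time savings.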
A second dimension of the efficiency paradox concerns the implications of AI systems for the right to a fair trial, and in particular the judicial duty to state reasons. Judges must explain their decisions in a manner that enables litigants to understand and contest them. Yet the opaque nature of many AI models increasingly shifts the burden onto judges to interpret and verify AI-generated outputs. In earlier work, I have examined the prospects of a more extensive and robust judicial duty to state reasons when AI systems are involved, one that would require judges to articulate not only the legal and factual foundations of their decisions, but also technical and pragmatic explanations related to the AI system’s role. Such an expectation would entail an additional burden and would in many cases be unfeasible in practice. Indeed, “[i]nsisting on rigorous and thorough examination of any content generated by these systems defeats the exact purpose of time-efficiency arguments in favour of AI systems.” This epistemic overhead reduces efficiency because human reasoning must now be combined with meta-monitoring of AI reasoning.
A further concern contributing to the efficiency paradox relates to the increasing reliance of judges and clerks on AI systems, which could lead to what Ganuthula has termed the ‘Paradox of Augmentation’. The concept suggests that while reliance on AI may initially augment human performance and improve productivity, its sustained use may gradually erode human skills and professional competences. If, for instance, judicial actors routinely rely on transcription tools or AI-assisted judgement drafting, the short-term time savings may be offset over the long term by a decline in critical reasoning skills, which are essential in adjudication given the high stakes involved. The continuous delegation of cognitive tasks reduces the opportunity to exercise and develop one’s own analytical capacities. Excessive dependency on AI systems, or ‘cognitive offloading’, may therefore lead to the unlearning of key competencies. In turn, this can produce inefficiencies: judges or clerks may become slower at independently verifying outputs, less capable of identifying errors or inconsistencies, and ultimately less adept at performing without technological support, which may slow down judicial proceedings in the end.
A final, more systemic concern relates to the potential consequences for appeal proceedings. Errors in AI systems, the opacity of algorithmic reasoning and the difficulty of providing explanations for AI-generated outcomes, combined with deteriorating human expertise and a perceived reduction in judicial agency, may collectively erode public trust in the judiciary. Reduced confidence in the integrity and competence of decision-making processes can trigger more frequent appeals. As noted by Sir Robert James Buckland: “If a court is fully automated, poor public perception of the technology would also certainly increase the number of appeals to a human judge. This would reduce many of the efficiency gains that make AI adjudications so appealing.”
-
Quo Vadis for AI in the Judiciary?
The central message of this blogpost is a call for caution in the face of overly enthusiastic narratives about AI in the judiciary. Efficiency is often treated as the central aspiration of court digitalisation. Yet such narratives tend to rely on a linear understanding of technological intervention, whereas courts function within a complex socio-technical environment. Efficiency gains in one narrowly defined task, such as faster transcription or draft judgements, may introduce inefficiencies at higher levels through increased oversight, verification demands, governance requirements and maintenance burdens.
These issues are not merely practical; they touch upon important normative stakes. The judiciary exercises public authority and must preserve its independence and legitimacy. Public trust in courts depends not only on timeliness (though important), but equally on due process, fairness and accountability. There is also a risk that technologies presented as tools for efficiency subtly shift power from judicial actors to technical infrastructures, thereby altering institutional dynamics in ways that remain insufficiently examined. Hence, if AI systems are to be deployed in courts, it should not be on the basis of efficiency claims alone.