AI in Court: The Judiciary’s Updated Guidance

Artificial Intelligence is no longer a futuristic concept. Recognising this, the judiciary has updated its guidance for all judicial office-holders on the use of AI.

Batsalya Mishra

December 11, 2025

Artificial Intelligence is no longer a futuristic concept; it is a present-day tool that is constantly evolving. Recognising this, the judiciary has updated its guidance for all judicial office-holders on the use of AI in courts and tribunals. This update provides a crucial framework for harnessing AI's potential while safeguarding the core principles of justice.

Here’s a breakdown of the essential rules.

The Foundation: Integrity Over Innovation

The guidance is built on one non-negotiable principle: any use of AI must be consistent with the judiciary’s overarching obligation to protect the integrity of the administration of justice. AI is a tool, not a substitute for judicial judgement and accountability.

The Core Principles for Using AI

The guidance focuses on mitigating the most significant risks associated with AI, particularly public chatbots like ChatGPT and Google Gemini.

  1. Understand the Limits: The document delivers a stark warning: AI is not a legal oracle. It explains that these tools are sophisticated pattern-matching engines, not authoritative databases. They are prone to "hallucinations": inventing fictitious cases, citations, and legal principles. Their knowledge may be outdated and often has a heavy US legal bias, making them unreliable for primary legal research.
  2. Guard Confidentiality Fiercely: This is one of the most critical directives. The guidance states that you should "treat all public AI tools as being capable of making public anything entered into them." Judges are warned never to input private, confidential, or non-public information. Even with chat history disabled, the assumption must be that any entered data could be disclosed.
  3. You Are Ultimately Accountable: The judge's responsibility is absolute. The guidance emphasises that judicial office-holders are personally responsible for any material produced in their name. While AI can assist, judges must always read the underlying evidence and verify every piece of AI-generated information against primary sources.

When Others Use AI: A New Courtroom Dynamic

The guidance wisely looks beyond the judge's bench, acknowledging that lawyers and litigants are also using these tools.

  • For Lawyers: While lawyers who use AI responsibly are under no obligation to disclose that use, the guidance suggests it may be necessary to remind them of their duty to independently verify all AI-generated research.
  • For Litigants in Person: This is a key area of concern. The guidance notes that unrepresented parties may use AI as their only legal advisor, unaware of its flaws. Judges are encouraged to inquire about its use if suspected and to remind litigants that they are responsible for the accuracy of their submissions.

Spotting the Signs of AI

The document includes helpful red flags that may indicate AI-generated content, such as:

  • Unfamiliar case citations (often from the US).
  • American spellings.
  • Text that seems polished but contains obvious substantive errors.
  • The accidental inclusion of AI prompts (e.g., "As an AI language model...").

Conclusion

The judiciary's approach to AI is one of pragmatic caution. It opens the door to using technology for efficiency but slams it shut on anything that threatens the accuracy, confidentiality, and integrity of the judicial process. The message is clear: embrace the tool, but trust in the timeless principles of justice.

See TrialView in action

As one of the leading litigation platforms, TrialView is the perfect addition to your tech stack. Book a free demo to get started.

Book a demo