
AI in the Courts: A Significant Moment for the Judiciary?

A recent tax appeal decision may represent a quiet but meaningful first in the UK judicial system: a Tribunal Judge openly disclosed the use of AI in preparing his judgment.

Eimear McCann

October 1, 2025


A recent tax appeal decision - VP Evans (as executrix of HB Evans, deceased) & Ors v The Commissioners for HMRC - may represent a quiet but meaningful first in the UK judicial system. A Tribunal Judge openly disclosed the use of AI in preparing his judgment. The Judge explained not only that AI had been used, but also how and why it was deployed.

What makes this significant is not the novelty of the technology, but the way in which it was applied: AI was used solely to produce a first-draft summary, with the methodology made transparent in the judgment itself. That approach sits neatly within the 2024 Practice Direction on Reasons for Decisions, which emphasises the need for clear and concise reasoning, and the 2025 AI Guidance for Judicial Office Holders, which highlights the principles of security, verification, and judicial ownership.

1. Transparency in Judicial Practice

In paragraph 42 of the decision, the Judge stated plainly: “I have used AI in the production of this decision.”

Whilst this disclosure is not mandatory - Judges are not required to explain the tools or processes they use when researching or preparing judgments - the Tribunal Judge, on this occasion, deemed it appropriate. By choosing to disclose the use of AI, the Judge aligned with the spirit of section 3(6) of the AI Guidance for Judicial Office Holders, which encourages personal accountability: “Judges are not generally obliged to describe the research or preparatory work which may have been done in order to produce a judgment. Provided these guidelines are appropriately followed, there is no reason why generative AI could not be a potentially useful secondary tool.”

2. Anchoring the Use of AI in Existing Guidance

The Judge supported his approach with explicit reference to both the Practice Direction on Reasons for Decisions (4 June 2024) and the updated AI guidance for the judiciary (as above). Quoting directly from the Practice Direction, he observed:

“Modern ways of working, facilitated by digital processes, will generally enable greater efficiencies in the work of the tribunals, including the logistics of decision-making. Full use should be made of any tools and techniques that are available to assist in the swift production of decisions.”

He continued:

“I regard AI as such a tool, and this is the first decision in which I have grasped the nettle of using it. Although judges are not generally obliged to describe the research or preparatory work which may have been done in order to produce a judgment, it seems to me appropriate, in this case, for me to say what I have done.”

3. Why This Case Was “Suited” to AI

The Judge also explained why this particular case was especially well suited to AI support. He identified five factors:

  1. It was a discrete case-management matter.
  2. It was dealt with entirely on the papers, without a hearing.
  3. The parties’ positions were set out solely in written submissions and supporting materials.
  4. No evidence was heard.
  5. No assessment of honesty or credibility was required.

This reasoning opens a wider conversation about the types of judicial work where AI might be suitable, e.g., administrative or paper-based matters, and where it might be inappropriate, such as fact-finding or credibility assessments.

4. Security and Confidentiality

The Judge stressed the importance of security, noting:

“As long as judicial office holders are logged into their eJudiciary accounts, the data they enter into Copilot remains secure and private. Unlike other large language models, it is not made public.”

The clear distinction between tools which scrape information from the public domain and those which remain secure in private tenancies is of fundamental importance, particularly in light of the recent controversy over AI hallucinations before the UK Courts.

5. AI as a Drafting Aid, Not a Decision-Maker

Importantly, the Judge clarified that AI was used only to create a draft summary, not to conduct legal research, with the output carefully checked and edited. As many practitioners know, AI “hallucinations” are not limited to fictitious case citations; they can also distort summaries or factual narratives.

The Judge underlined that responsibility for the judgment rested entirely with him:

“This decision has my name at the end. I am the decision-maker, and I am responsible for this material. The judgment applied – in the sense of the evaluative faculty, weighing-up the arguments, and framing the terms of the order – has been entirely mine.”

This distinction is crucial. AI may accelerate drafting, but judicial oversight and authorship remain paramount.

A Quiet, Yet Significant Step?

This development may be quiet, but it is significant. Whilst the decision has not been outsourced to AI, it highlights what we already know: AI, when properly engaged, is incredibly useful in the sifting and structuring of written materials, saving valuable judicial time.

More importantly, this case opens up a wider conversation about transparency. Will other Judges be willing to be as candid about their use of similar tools, and do they in fact need to be? Will AI become so commonplace in the courtroom that disclosure of its use simply fades away? Conversely, will legal teams be deemed disadvantaged, or even “negligent”, if they fail to use the AI tools on offer?

The conversation is just getting started.
