Lee Sharpe | 02 Oct 2024

Lee Sharpe, author of Bloomsbury Professional Online’s One Month in a Minute update, reviews the recent CTA annual address


I watched the Address, which is still freely available to view at the time of writing.

I found it useful overall, and certainly accessible.

The principal speaker was Conrad Young, who initially undertook to explain the systemic distinction between traditional coding and new machine learning/generative AI.
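
For anyone who missed that part of the Address, the distinction can be sketched very roughly as follows: traditional code applies rules a human has written down in advance, whereas a machine learning model infers its own rules from example data. The snippet below is my own minimal illustration of that contrast (the threshold, the made-up figures and the use of scikit-learn are all my assumptions, not anything demonstrated in the talk).

```python
# Illustrative only: an explicit, human-written rule vs a model that learns from data.
from sklearn.tree import DecisionTreeClassifier

# Traditional coding: the rule is spelled out by a human.
def flag_for_review(declared_income: float, bank_deposits: float) -> bool:
    # Hypothetical rule: flag where deposits exceed declared income by more than 20%
    return bank_deposits > declared_income * 1.2

# Machine learning: the "rule" is inferred from (entirely made-up) past examples.
examples = [  # [declared_income, bank_deposits]
    [30_000, 31_000], [50_000, 49_000], [40_000, 70_000], [25_000, 60_000],
]
outcomes = [0, 0, 1, 1]  # 0 = fine on enquiry, 1 = flagged (fabricated labels)

model = DecisionTreeClassifier().fit(examples, outcomes)

print(flag_for_review(30_000, 45_000))    # the rule says: flag
print(model.predict([[30_000, 45_000]]))  # the model says: whatever it has learned
```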

He then took us through the history of how tax authorities (not only HMRC) have embraced digitalisation – the adoption of IT/digital channels – and, in particular, have sought to acquire ever more data to analyse (because where’s the harm?).

Two illustrative examples of “AI in action” were offered:

  1. French tax authorities used satellite data to identify and assess more than 100,000 previously-undisclosed domestic swimming pools; while
  2. The Dutch Government apparently resigned after its tax authority wrongly accused more than 20,000 families of childcare benefit fraud, based on a poorly-designed risk assessment tool that had utilised machine learning.

Conrad then turned to how matters had progressed within the profession itself – for example, tools that predict how a court might rule in a particular case.

Taking a step back, he considered how the tax system overall might work in future. That future is one where “tax just happens” – where rules are embedded, and everything works in real time. For example, where payments are split automatically, and the tax authority is paid its VAT before the business actually receives the customer’s payment. There would be less need for tax returns, as everything would be managed automatically. (And, presumably, without any need for human judgment or adjustment.)
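
To make the “tax just happens” idea a little more concrete, a split-payment mechanism might look something like the sketch below, with the VAT element peeled off and remitted to the tax authority at the moment the customer pays. The 20% rate, the field names and the function itself are my own illustrative assumptions rather than anything shown in the Address.

```python
# Illustrative sketch of "split payments": VAT is diverted to the tax authority
# at the point of payment, before the business ever sees the money.
from decimal import Decimal, ROUND_HALF_UP

VAT_RATE = Decimal("0.20")  # standard rate, assumed for illustration

def split_payment(gross_amount: Decimal) -> dict:
    """Split a VAT-inclusive customer payment into the trader's share and the VAT."""
    net = (gross_amount / (1 + VAT_RATE)).quantize(Decimal("0.01"), ROUND_HALF_UP)
    vat = gross_amount - net
    return {"to_trader": net, "to_tax_authority": vat}

# A £120.00 customer payment arrives as £100.00 to the business and £20.00
# straight to the tax authority – with, in principle, no return to file later.
print(split_payment(Decimal("120.00")))
# {'to_trader': Decimal('100.00'), 'to_tax_authority': Decimal('20.00')}
```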

We then heard from a representative of one of the Big 4 firms, who was quite enthusiastic about their in-house AI tools. These could (for example) take the papers in a Due Diligence Deal Room – typically 500-1,000 documents – and devise an action plan in one or two minutes that was “about 80% of the way there”, where previously a dedicated team of human practitioners would have taken a week to compile it. Looking ahead, he anticipated that ChatGPT 5.0 would offer a 100-fold improvement on ChatGPT 4.0, just as ChatGPT 4.0 had demonstrated a 100-fold improvement on ChatGPT 3.0. He was a keen advocate for closed models that use only internal and carefully-curated (proprietary) libraries, rather than open models that may have been trained on any and all old tat on the wild-wide web, like Reddit.

I found solace in the final speaker, who pointed out that even internally-curated or “closed” models would still hallucinate – pass garbage off as fact (or, say, rely on previous tax case judgments that never actually happened, as per Harber v HMRC [2023] UKFTT 01007 (TC)). She was notably less evangelical about AI, and happy to acknowledge the various weaknesses in existing models. But she was clear too about the overall direction of travel: it is coming, and it will be transformative. For example, many AI tools already include a Generative Adversarial Network (GAN) that basically uses fresh AI against the original AI as part of the same tool – a kind of internal competition – to monitor and “check” answers, spot inconsistencies, and hopefully weed out those pesky hallucinations.
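
For what it is worth, the classic GAN formulation pairs a “generator” with a “discriminator” and trains the two in competition, which is broadly the internal contest the speaker was describing; whether production tax tools literally embed one is not something I can verify. The toy below is my own sketch of the basic loop, using PyTorch on made-up one-dimensional data, and is emphatically not any vendor’s hallucination-checker.

```python
# Toy GAN: a generator learns to mimic a 1-D Gaussian while a discriminator
# tries to tell real samples from generated ones. Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2 + 5    # "real" data: samples from N(5, 2)
    fake = gen(torch.randn(64, 8))       # generator's attempt to imitate them

    # Discriminator: learn to label real samples as 1 and generated ones as 0
    d_opt.zero_grad()
    d_loss = bce(disc(real), torch.ones(64, 1)) + bce(disc(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator: learn to make the discriminator call its output "real"
    g_opt.zero_grad()
    g_loss = bce(disc(gen(torch.randn(64, 8))), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print(gen(torch.randn(5, 8)).detach().squeeze())  # should drift towards values near 5
```

The point, loosely, is that the second network’s only job is to catch the first one out – which is the flavour of “checking” the speaker had in mind.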

One of the more interesting points raised in the talk concerned sports automation: VAR decisions have been widely ridiculed for obvious howlers, but some of those howlers were in truth a reflection of bad rule-making in the first place, rather than of VAR’s implementation or interpretation of the rules. Will AI’s more literal/dogmatic interpretation of legislation reveal flaws in the legislation – or in its conventional interpretation – that humans have, in effect, glossed over for years?

Almost certainly, I should say.

But in this writer’s opinion, the real dilemma is whether they will then be identified and highlighted as issues that need to be fixed, or simply assimilated as the “new normal” without a backward glance.

What of the unfairness revealed and rectified in Lobler v HMRC [2015] UKUT 0152, or the several reversals in the long-standing received wisdom of CGT main residence relief typified in Higgins v HMRC [2019] EWCA Civ 1860, and Lee v HMRC [2023] UKUT 242 (TCC)? Would these have gotten to court if the expert AI had simply said “this is the way things work” or, if they had gotten to court, would the AI-assisted judge have simply and easily concluded “this is the way it’s always been”?

One or two other points raised in the talk, with which I might also take issue:

  1. A proposal that legislation should be written more like computer code in future

It already is, and that is obvious when you pick up any Finance Act from roughly the last decade – its length is a surefire giveaway, if nothing else. One might wonder whether it was drafted by people who were tax experts with a knowledge of coding, or coders with a modicum of tax knowledge. Either way, new legislation – particularly anti-avoidance – is often incrementally, exhaustingly verbose to the point of being unreadable. I look forward to seeing Finance Bill 2026 being debated by Parliament in hexadecimal.

Apart from which, if AI is supposed to be the new super-clever hotness, why on Earth do we have to move away from natural language in our own legislation to help AI understand it? If AI is not actually clever enough (yet?) to reason through the legislation as it already stands, warts and all, then… maybe give us a call when it is?
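
For the curious, “legislation as code” usually means something like the sketch below: a statutory test re-expressed as an executable function. The figures are the familiar personal allowance and £100,000 taper threshold; the function itself is my own illustration, not anything proposed at the Address.

```python
# Illustrative only: the income tax personal allowance taper expressed as code.
# 2024/25 figures; the allowance is reduced by £1 for every £2 of adjusted net
# income above £100,000. My own sketch, not draft legislation.
PERSONAL_ALLOWANCE = 12_570
TAPER_THRESHOLD = 100_000

def personal_allowance(adjusted_net_income: int) -> int:
    if adjusted_net_income <= TAPER_THRESHOLD:
        return PERSONAL_ALLOWANCE
    reduction = (adjusted_net_income - TAPER_THRESHOLD) // 2
    return max(PERSONAL_ALLOWANCE - reduction, 0)

print(personal_allowance(90_000))   # 12570 – untouched
print(personal_allowance(110_000))  # 7570  – tapered
print(personal_allowance(130_000))  # 0     – fully withdrawn
```

Precise and executable, certainly – but hardly more readable once real-world exceptions start piling up.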

  2. The new seamless integration of tax so that “it just happens” will make tax returns redundant

Cobblers. See MTD. Basically, everything is turning into a return. Or even more of a return than was originally sold to us (e.g., PAYE RTI submissions, which it is proposed will also include employee hours and locations). Perhaps the real issue is whether – or where – in the “submissions” process the taxpayer gets the opportunity to amend, annotate or even appeal, in such a heavily automated system. One that is designed, from the ground up, to minimise taxpayer intervention, so far as possible. Remember: computers are good, and HMRC is fair, but taxpayers are doubleplusbad. Not necessarily because they make mistakes but because, sometimes, they want to ask a question!

  3. Given that tax authorities are getting so much data and may soon be able to automate the assessment process from end to end, will there still be a need for Self Assessment – for the taxpayer to take primary responsibility for their own tax liability being correct?

This was mooted by a CTA (in fact, all the speakers were CTAs), who then supposed that HMRC was unlikely to give up on Self Assessment. At the risk of revealing that I am as old as Time, I was around when kindly old Hector was reassuring us that moving to Self Assessment was entirely reasonable, because we would get faster/greater certainty over our tax affairs. Without dwelling overmuch on the shenanigans that have transpired since then, I should rank Self Assessment as roughly the second-greatest trick the Devil ever pulled. There’s about as much chance of HMRC giving up the loaded dice of Self Assessment as there is of Sauron saying, “Sure, keep the ring: I never really wanted it anyway”.

But enough about what I think: watch it yourselves (or maybe even get your own AIs to watch it for you).

