
We can't stop talking about AI

  • 5 hours ago
  • 4 min read

Kevin Phillips considers what it means when AI starts building itself, and why the profession's default responses aren't going to cut it.


I know. You probably have AI fatigue already, too. But at the start of February, we reached a significant landmark in AI’s development that makes ignoring it impossible. Two of the leading frontier AI developers, OpenAI and Anthropic, released major new models that contributed to their own development. That’s right, ChatGPT and Claude were instrumental in building their own latest releases!


OpenAI said that GPT-5.3-Codex was “instrumental in creating itself.” It was used to debug training, manage deployments, and analyse tests and evaluations. This is significant. In fact, OpenAI says that Codex can do nearly anything its human developers can do on a computer.


And Anthropic, which released Claude Opus 4.6, said that AI is writing much of the code and is speeding up the release of new models. Anthropic CEO Dario Amodei predicts that we're only a year or two away from the current generation of AI autonomously building the next.


Consider this: the gap between GPT-4, the model that first made the world sit up and take notice, and the model that helped build itself was less than three years. By comparison, it took the smartphone roughly a decade to go from novelty item to necessity. AI isn't following the technology adoption curves we're used to; it's compressing what used to take years into months, and months into weeks. This means that by the time most businesses have written their AI strategy document, the technology it describes has already moved on.


We can't ignore this, and anyone still saying “AI needs at least another 5 to 10 years” is not paying attention. We need to talk about what this means for our profession, our people, and our future.


And what makes this even harder to navigate is that your people aren't in the same place. Some are racing ahead with AI, for better or worse. Others are digging in, hoping it goes away. This patchwork of attitudes and capabilities across your organisation is something I'll explore in detail next month, but it's worth flagging now because it shapes everything that follows.


To make sense of where we are with AI today, I'm going to explore five things I'm thinking about across two articles. This month, let's start with two topics that are directly tied to the speed problem: why an incremental approach to AI adoption no longer cuts it, and what to do when traditional training can't keep up. Next month, I'll tackle the messier human challenges including organisational resistance, bias, and where strategic friction fits in.


1. The time for dipping a toe in the AI water is over

Hopefully, if a few years ago you were preaching wait-and-see, you've moved on from that. Which means you might now be taking an incremental, toe-in-the-water approach to AI. This can look like using AI for an hour or so a day to get a feel for it, or identifying which parts of your business could provide a proof of concept for automation. Maybe you're drafting emails or extracting action points from meetings using AI today.


There's nothing wrong with these tactics, but they miss the bigger picture: AI is here, it is starting to take actions on our behalf, and its true value comes when it's used systemically, not piecemeal. Think of it like this: taking an incremental approach to AI is like deciding at the start of the First Industrial Revolution to use steam trains to transport only some of your new materials or products and sending the rest by horse and cart. The speedy arrivals will languish in storage while the remainder catches up. Likewise, your systems and workflows are only as fast as their slowest link.


2. Training as we know it is obsolete

With today's accelerating speed of change, how do you provide training for your people? You can't just send them on a course anymore. Anything they learn will be out of date before they get their certificate of completion. So how do we support ourselves and our teams with a change this big?


There are two ideas that I'm considering.


The first, and what I've found matters most, is supporting our people with the critical thinking skills to challenge AI's glib, polished-sounding answers. Think of it this way: most of us can look at a spreadsheet and tell whether the answer looks right. Fewer of us can pull apart the methodology behind it and spot if and where the logic breaks down. That's the skill set we need for AI. Not just checking the output but understanding how it got there.

And second, we could use AI to train ourselves on it. I've started doing this by asking AI to explain its own reasoning, stress-testing its outputs by feeding them back, and deliberately challenging the logic. It's the only way to keep pace with the speed of change, and it builds exactly the kind of critical muscle I mentioned above.


Speed is only half the problem

The speed of AI's development isn't going to slow down for us. Neither an incremental approach nor traditional training will equip your business to keep up. But speed is only half the challenge. Next month, I'll look at what happens when you add human complexity to the mix, including the patchwork of attitudes across your teams, the biases AI amplifies, and why building the right friction into your AI processes might be the most important thing accountants can do right now.

