
Accountants’ role is pivotal in an AI world

As the use of artificial intelligence picks up speed, our profession’s expertise isn’t becoming less relevant – it’s becoming the thing that makes AI safe to use.

In my previous article, I looked at how fast AI is moving and why that speed makes two of our default responses, incremental adoption and traditional training, inadequate. But speed isn't the whole story. Even if you could keep pace with the technology, you'd still face three challenges: your people are engaging with AI in very different ways and with very different levels of confidence; the tools themselves carry biases we're not used to looking out for; and knowing where to intentionally build friction into the process is becoming a core professional skill.

These are three of the things I think we need to get right.

1. You have a patchwork of capabilities and attitudes in your organisation

Don't make the mistake of assuming everyone is reacting to AI in the same way that you are. There are undoubtedly cohorts in your organisation that are enthusiastically using AI today, likely without any oversight in place. Some of this group will have the nous and inclination to think analytically and critically about the outputs. They will see AI as a tool, challenging its results and using judgment about how it is deployed.

Another part of this group, though, is doing the opposite. They're blindly accepting what AI says, cutting and pasting the output without any critical thinking applied. (They are also likely being praised for how much faster they are working.)

There’s also a third group. They think AI is scary, too complicated, and coming for their jobs. They are doubling down on the repetitive and the mundane manual work that you’d prefer they didn’t do, because this is comfortable and familiar.

How do you, as a business leader, manage this complexity? First, don’t make the mistake of simplistically mapping these cohorts onto generations in the workplace. Don't assume that the Gen X elder, with their breadth of knowledge and experience, is avoiding AI. And don't assume that your Gen Z junior is magically tapped into the AI hivemind.

What I've seen work is harnessing the early adopters, but with appropriate checkpoints to keep their enthusiasm intentional and compliant. And alongside that, finding ways to reassure and bring along the more fearful and resistant among your team, because shutting them out only deepens the divide.

2. Bias has just scaled

We saw this happen with social media, but it's happening at a greater scale and more insidiously with AI. Because AI learns from you and about you, and it wants to hook you in and keep you coming back, it amplifies and reflects what you already think. This obviously has implications politically and socially, but also impacts approaches to work, problem-solving, creativity, strategy, and governance.

For accountants, this plays out in ways we might not immediately recognise. First, there's tactical bias. If you've spent your career advising SMEs on tax planning, AI will lean into SME-centric approaches and may miss structures or reliefs that would be obvious to someone working with larger corporates. AI makes you more efficient at your existing approach without ever questioning whether it's the right one.

Dispositional bias is a bit harder to spot. AI picks up on your professional temperament and reflects it back to you. This means that a naturally cautious accountant will get conservative forecasts, conservative risk assessments, conservative advice. But a more bullish colleague will get the opposite. Neither will be challenged, because the AI is optimising for what resonates with you, not necessarily what's accurate or intended. You feel like you're getting independent validation of your thinking, when you're actually getting a mirror.

AI also carries its own blind spots. Its training data skews towards certain markets, regulatory frameworks, and business models. UK accountants asking for guidance may get responses subtly shaped by US GAAP thinking rather than IFRS, or weighted towards listed company scenarios when advising owner-managed businesses. The AI won't flag this. It will just sound confident.

For instance, when tackling the same work challenge, the results my co-founder and I get from AI can vary vastly. "Two heads are better than one" has never been truer if you want to neutralise bias and avoid single-track thinking, especially when AI takes action on our behalf.

A key takeaway is that AI is not your friend, even if it's friendly. And AI is not neutral.

3. Strategic friction is good

This might sound contradictory given how much I talk about using AI to navigate constant, accelerating change. But strategic friction is essential to ensure quality outputs. Two starting points are the creation of guardrails and checkpoints.

Guardrails are pre-set boundaries that prevent AI from overstepping. This is increasingly important as agentic AI starts to take actions on our behalf. Guardrails could look like limits on the value of journal entries AI can auto-approve, or restricting which data sources AI can pull from to protect confidential client information.

Checkpoints, on the other hand, are milestones where humans have to step in to review, validate, and course-correct. For instance: reviewing an AI-generated cash flow forecast before it goes to the board, verifying that AI-categorised transactions are coded correctly, and, probably most importantly, handling sensitive conversations with clients.

In short, guardrails stop AI from going off-piste, and checkpoints are where humans step in to confirm the route is right. Both are essential, but they operate at different stages: guardrails are preventive, checkpoints are evaluative.
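To make the distinction concrete, here is a minimal sketch of how that pair might be wired up in practice. Every name, threshold, and data structure below is hypothetical, invented for illustration rather than taken from any particular accounting system: a guardrail blocks the action up front, and anything it blocks lands at a checkpoint for a human to review.

```python
# Hypothetical sketch of a guardrail (preventive) and a checkpoint (evaluative)
# around AI-proposed journal entries. All names and thresholds are illustrative.

AUTO_APPROVE_LIMIT = 1_000.00  # guardrail: max value AI may post unreviewed
ALLOWED_SOURCES = {"general_ledger", "bank_feed"}  # guardrail: data AI may read


def guardrail_check(entry: dict) -> bool:
    """Preventive: decide before anything happens whether AI may act alone."""
    if entry["amount"] > AUTO_APPROVE_LIMIT:
        return False
    if entry["source"] not in ALLOWED_SOURCES:
        return False
    return True


def process_entry(entry: dict, review_queue: list) -> str:
    """Route an AI-proposed entry: auto-post it, or stop at a checkpoint."""
    if guardrail_check(entry):
        return "auto-posted"
    # Checkpoint (evaluative): a human reviews, validates, and course-corrects.
    review_queue.append(entry)
    return "queued for human review"


queue: list = []
small = {"amount": 250.00, "source": "bank_feed"}
large = {"amount": 25_000.00, "source": "bank_feed"}
print(process_entry(small, queue))  # auto-posted
print(process_entry(large, queue))  # queued for human review
```

The design choice worth noting is that the guardrail is a hard boundary the AI cannot argue its way past, while the checkpoint is deliberately a queue rather than a rejection: the entry isn't discarded, it simply cannot proceed without a person confirming the route is right.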

Accountants are, in many ways, ideally placed to lead on this. We already think in terms of audit trails, segregation of duties, and materiality thresholds. We build structured oversight into financial processes as a matter of course. These are guardrails and checkpoints by another name. The discipline that underpins our profession is exactly what AI workflows need, which means that designing the human-machine interface in your organisation may be one of the most natural extensions of what we already do.

This strategic friction ensures that the colleague who is up against the clock doesn't just accept what AI tells them. It makes sure that the invisible decisions AI makes are surfaced and understood before being implemented. And hopefully, this discipline reassures sceptical humans of their value and makes clear that AI and humans working together lead to better outcomes. For accountants and the finance function, this step is especially important to ensure governance, compliance, and standards are always met.

Succeeding as accountants in an AI world

There's an understandable fear that AI diminishes the accountant's role. But I'd argue the opposite is happening. What we should be building towards is AI that assists, not replaces, because the more AI does, the more critical professional oversight becomes. AI-generated forecasts need someone who understands the assumptions behind them and automated journal entries need a framework that ensures they meet professional standards. Crucially, every AI-driven recommendation to a client needs a human who can weigh the context that the technology can't see.

Our profession's deep expertise in governance, compliance, and structured decision-making isn't becoming less relevant in an AI world. It's becoming the thing that makes AI safe to use. Harness your early adopters, work with them to develop guardrails and checkpoints, and embed the critical thinking skills to challenge and iterate with AI. And never sit still. This steam train is very soon going to be a jet plane, and then a rocket ship.

AI is the ultimate "do more with less" opportunity. Let's make sure we're doing the right "more".

As published in AccountingWeb, April 2026
