
AI: The new calculator panic

  • IDU

They said calculators would dumb us down. Does that sound familiar?

The last time you sent someone a balance sheet, did you also have to send a handwritten note showing your arithmetic to prove you did the calculations yourself? And the last time you sent a proposal, did you have to somehow show you independently knew how to spell all the words in the document? Or worse yet, did you leave in a typo or two to show that you didn’t “cheat” with spellcheck?

Of course you didn’t, and these are clearly laughable scenarios.


The AI fallacy

Yet why are we treating artificial intelligence, probably the most significant workplace tool since the internet, in exactly this way? We’ve all seen the smug posts on LinkedIn: “I can tell you used AI because…”, with em-dashes as the number one offender. The irony that AI naysayers seem to be missing is that AI learns from us, and holds up a mirror to humanity. Those em-dashes? Emily Dickinson, F. Scott Fitzgerald, and Herman Melville adored an em-dash. Buzzword overload? Have you read some of the B2B thought leadership from the pre-AI days? Especially in tech. And that uniform structure? You want your content to be found and to convert, right? Best to follow the established best practices for writing for the internet.


Now, I’m not excusing poorly written, uninspired content. Thought leaders need to live up to their name. But AI is not the villain here, and it can’t be helpful to throw the baby out with the bathwater. I experienced this firsthand recently, when an article of mine, ironically enough on workplace efficiency in a changing world, was called out as AI-generated (not by this publication!). The ideas were mine. The angles were mine. The research was mine. The examples were from my business experiences. I used AI for some of the writing because, ahem, this was more efficient. And yet, ironically again, an AI tool flagged it, and I lost all my efficiency gains in the time spent smudging the edges and making the writing meet today’s criteria for “human”. Or at least, not AI.


Training the next generation of accountants

Beyond LinkedIn, another place where this panic has played out is in universities, as academics and administrators struggle to adjust to an AI world in which students see the technology as being as commonplace as spellcheck.


The initial reaction of many universities was to do everything in their power to keep their students away from AI. This included using faulty AI detectors and tools to track natural typing patterns, rather than updating their curricula and pedagogy to embrace AI and better prepare students for success.


The argument seemed to go: if you just use AI for everything, you’ll never understand the principles at stake. If you can use ChatGPT to write you a passing essay on the drivers of the Industrial Revolution, will you ever grasp them yourself? Or will you just pass with little effort?


But this is only looking at part of the picture, and it is not setting up the future workforce for success. This argument doesn’t only apply to universities, of course. If you are forcing your team to do things manually that AI could do in a heartbeat, the same applies.


Editing, and writing

I recently had to write a farewell speech for our first-ever employee, who, after 27 years, was retiring. This is not a milestone you think about as an entrepreneur when you set out. So, obviously, I turned to Copilot to draft something. With a three-line prompt it developed a more than adequate speech in seconds. I could have presented it then and there.


But I didn’t. And this is the part that too many people seem to be missing, probably because when we were sold AI as making us more productive, we heard that it would make us faster. And while it definitely does, it’s not in the way we expect.


Instead of presenting that first draft of the speech, I thought about it for a bit and let it percolate. While the speech was adequate, it wasn’t good enough. And it certainly wasn’t what I thought my first employee deserved. Unsurprisingly, it lacked emotion, colour, and human connection. So I re-briefed Copilot, several times, adding anecdotes, emotion, and nuance that AI could never have come up with by itself. After this editing process, I ended up with a speech I was very happy with. And while it didn’t take a few minutes, it certainly took less time than if I had been working alone.


Teach critical thinking, not panic

I’d argue that instead of resisting AI, we should be teaching students and our employees how to grapple with AI critically, how to use it as a learning tool, and how to think with it, not avoid it. We should be learning how to use AI the same way we learn other core skills, such as research methods or data literacy. Banning it doesn’t stop its use; it just ensures students and employees use it badly and on the down low. People who have learned to interrogate AI, to test its reasoning and spot its limits and hallucinations, will add human judgment to machine speed and do better work. These are the people I want to hire.


What about us accountants?

I’m old enough to remember when there was a panic over calculators in the classroom. “You won’t be allowed to use them in exams,” we were told. (We soon were.) “Do you think you’ll be walking around with a calculator in your pocket?” we were asked. (We all are, and it’s a far more sophisticated calculator than the basic one we had at school.)


The value in what we bring to our clients is not our arithmetic ability. Shocking, I know. Our clients value that we know what to do with the numbers, how to stay compliant, our instinct (born of experience and knowledge) to spot when things look wrong, and our ability to sit back and decide what the numbers mean. They don’t care about whether we can replicate an Excel formula in long-hand on a napkin to show our workings.


The white collar panic

The current panic over AI is not surprising. It’s the machines coming for the white-collar jobs, just as they did for blue-collar jobs with the rise of robotics. And undoubtedly jobs will disappear, change, and evolve, with very real consequences for real people. But AI is here; we can’t fight it or ignore it. In fact, we should encourage it.


What we should also do is ensure the outcomes are net positive by engaging with it, upskilling ourselves, evolving our roles, having realistic expectations, and most importantly, remembering the value that our human brains bring to the partnership.

This seems a far better use of our time than doubling down on legacy ways of doing things, or inserting deliberate tyops and messiness into documents to prove we didn’t use AI.

