Jul 27, 2025

Fire, AI, and the Illusion of Progress

AI is powerful, but without structure, strategy, and empowered people, it can backfire. Learn how to turn hype into real value by using AI with purpose and care.


“Fire is one of the greatest discoveries of our civilisation. It cooked our food, kept us warm, and helped us evolve. But handled carelessly, it burns down homes, forests, and lives. It’s a gift. And a liability.”

Brianna Wiest, 101 Essays That Will Change the Way You Think

That quote has stayed with me since I heard it last week while cycling to work. Fire changed the course of humanity, but only when harnessed properly. Left untamed, it causes devastation.

AI feels like our modern-day fire. It’s powerful, transformative, and increasingly available. But its real impact (good or bad) depends entirely on how we use it.

We’ve been here before.

The Cycle of Hype

Every decade, a new technology arrives that promises to solve everything. These are some I have encountered during my career:

  • Robotic Process Automation (RPA) was positioned as the silver bullet for process efficiency. In theory, bots would replace repetitive tasks and free up human potential. In practice, many companies just automated inefficient processes instead of fixing them.
  • Low-code/no-code platforms aimed to democratise software development. But they often created isolated tools without long-term sustainability or alignment with enterprise architecture. Builder.AI’s recent collapse serves as a stark reminder that market hype alone offers no guarantee of resilience or long-term value.
  • Excel/VBA (Visual Basic for Applications): not a new revolution, but a familiar trap (and one I helped build earlier in my career). Organisations still rely on fragile tools built by individuals, undocumented and now untouchable because the creators have long since left. “It works, so don’t touch it” has become a dangerous motto.

These patterns repeat. The promise is always the same: this will fix everything. The outcome, unless handled with care, is rarely as transformative.

How AI Differs from Past Tech Hype

In his book Nexus, historian Yuval Noah Harari compares AI to the advent of the printing press.

The printing press didn’t just speed up the copying of books; it transformed societies by democratising knowledge, reshaping economies, politics, and culture in ways previously unimaginable.

AI is poised to be just as transformative. It won’t simply automate tasks or process information faster; it will fundamentally change how knowledge is created, shared, and applied. AI challenges our traditional decision-making processes and the very structure of work itself. Like the printing press, it promises to reshape the world as we know it, ushering in new opportunities and risks.

This is more than a technological upgrade. It’s a societal revolution.

The real challenge is not the technology

The technology isn’t the problem. The problem is how we choose to use it and how we decide what to use it for.

AI use cases are too often selected by intuition or enthusiasm. Teams gather in workshops and come up with “cool ideas” instead of focusing on where the real friction is.

What is needed is quantification:

“We spend 30 minutes reviewing customer emails every day. Over a year, that’s more than 120 hours. If GenAI could draft initial responses in just 5 minutes, we’d free up over 100 hours annually for more valuable work.”

To uncover these high-impact areas, start by mapping existing processes, identifying bottlenecks, and interviewing front-line staff about their most time-consuming or error-prone tasks. This bottom-up approach ensures AI is applied where it can deliver tangible business value, not just speculative novelty.
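The back-of-envelope maths above can be sketched in a few lines. This is only an illustration of the quantification approach: the figures (30 minutes per day, roughly 250 working days a year, a 5-minute AI-assisted draft) are assumptions taken from the example, not measured benchmarks.

```python
# Back-of-envelope estimate of annual time freed by an AI-assisted task.
# All figures are illustrative assumptions, not measured benchmarks.

WORKING_DAYS_PER_YEAR = 250  # assumption: 5-day weeks minus holidays

def annual_hours_saved(current_minutes_per_day: float,
                       assisted_minutes_per_day: float,
                       working_days: int = WORKING_DAYS_PER_YEAR) -> float:
    """Hours per year freed if a daily task shrinks from one duration to another."""
    saved_per_day = current_minutes_per_day - assisted_minutes_per_day
    return saved_per_day * working_days / 60

# Example from the text: 30 minutes of email review, cut to 5 with GenAI drafts.
print(f"Current effort: {30 * WORKING_DAYS_PER_YEAR / 60:.0f} hours/year")
print(f"Freed up:       {annual_hours_saved(30, 5):.0f} hours/year")
```

Crude as it is, a calculation like this turns a “cool idea” into a comparable business case: every candidate use case gets a number, and the numbers can be ranked.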

But even when we identify the right problems, the journey to a successful AI solution faces many dangers. Like fire, AI needs structure and oversight, or it risks causing more harm than good.

Why AI Governance and Guardrails Matter

AI brings risks that many organisations are still not prepared for, especially in these areas:

  • Data quality: Garbage in, garbage out. AI trained on flawed, biased, or outdated data will produce flawed, biased, or misleading outputs, at scale.
  • Ethics and bias: AI models can reinforce harmful stereotypes or exclude entire populations if not built and governed properly. This isn’t theoretical; it is already happening in hiring tools (see the recent lawsuit against Workday), credit scoring models, and healthcare applications.
  • Governance: Establishing clear lines of accountability and oversight is paramount. Who reviews AI-generated outputs? Who is accountable for decisions made by AI? How do you audit a black-box model that updates constantly to ensure transparency and fairness?
  • Change enablement and training: Deploying AI tools without comprehensive change management and training is like handing someone fire without guidance: it risks misuse or being ignored. Organisations must invest in teaching teams not only how to operate AI, but when and why to use it, how to interpret outputs critically, how to understand the ethical implications, and how to adapt workflows. Continuous learning is essential for sustainable AI adoption.

Without clear accountability, ethical oversight, and people enabled to use AI effectively, organisations risk building brittle systems that do more harm than good.

People First, Tools Second

As the history of technology repeatedly demonstrates, true transformation isn't driven by the tools themselves, but by how we choose to use them.

Whether it is fire, the printing press, RPA, or AI, the pattern is clear: breakthrough tools only change the world when people and systems evolve with them. Otherwise, we’re just lighting matches and hoping for the best.

AI can transform work. But transformation will only happen when it is used with intention, backed by clear use cases, structured governance, and teams empowered to adopt change, not resist it.

It’s not what the tool can do. It’s what you enable people to do with it, and what you allow them to stop doing.

------------------------------------------------------------------------------------------

A note on creation: This blog post was developed collaboratively, with AI tools assisting in structuring and refining the text. It serves as an example of how AI can augment, rather than replace, human thought and creativity.