
Mar 16, 2026

Is Your AI Actually Under Control?

AI governance for board directors: how to assess whether your organisation's AI decisions are truly controlled, auditable, and aligned with your risk appetite.


One Big Question is a monthly newsletter for directors and executive leaders navigating data, AI, digital transformation, and digital sustainability. Each edition is written by a governance leader or senior decision-maker and grounded in real boardroom experience and organisational accountability from New Zealand and internationally.

Issue 2 | March 2026 | by Monica Richter

Most boards are already good at asking the general question “What’s our AI strategy?”

Yet the one BIG question should really be: “Can we evidence that our AI and data decisions are under control, auditable, and aligned with our risk appetite today?”

This question goes beyond roadmaps and scattered pilots. It pushes management to show proof that AI and data are governed with the same discipline as financial reporting or cyber risk, not just talked about in vision decks.

Regulators and investors are rapidly raising expectations on data and AI governance, and boards are on the hook whether or not they feel ready. AI rules are proliferating across jurisdictions, and the EU AI Act is becoming a global reference point for “high-risk” systems, documentation standards, and enforcement expectations.

At the same time, more boards are formally assigning AI oversight to a committee, often the audit or risk committee, signaling that AI failures will increasingly be treated as governance failures, not just technology glitches. Yet surveys consistently show that only a minority of companies have mature AI governance frameworks or clear metrics for oversight, creating a widening gap between expectations and reality.

What “under control” really looks like

For AI and data, “under control” does not mean zero incidents. It means credible systems for identifying, escalating, fixing, and learning from issues at speed, while being able to demonstrate what was done and why.

Four elements are emerging as common expectations:

1. Clear inventory of AI and high-risk models

Boards should expect management to know where AI is used, which use cases are critical or “high-risk,” and who owns each system across its lifecycle, from design and training through deployment, monitoring, and retirement.

Without a living inventory, there is no way to manage cumulative risk, prioritize oversight, or respond quickly and confidently to regulatory or media questions.
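
As a minimal illustration of what a living inventory entry can capture, consider the sketch below. The field names and risk tiers are hypothetical, not a prescribed schema; in practice an inventory usually lives in GRC tooling rather than code.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in a living AI inventory (illustrative fields only)."""
    name: str               # e.g. "credit-decisioning-v3"
    use_case: str           # the business decision the system supports
    risk_tier: str          # "high" | "medium" | "low", per internal policy
    accountable_owner: str  # a named individual, not a team alias
    lifecycle_stage: str    # "design" | "training" | "deployed" | "retired"
    last_reviewed: date     # staleness signal for the inventory itself

def high_risk_systems(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Filter the inventory to the systems the board should see first."""
    return [r for r in inventory if r.risk_tier == "high"]
```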

2. Defined governance and accountability

Leading practices include formal AI governance structures, integrated with existing risk, compliance, model risk, and information security functions.

For boards, the key is clarity. Which committee owns primary oversight? How do management committees coordinate? Who signs off on high-risk use cases, and what criteria do they use?

Increasingly, audit, risk, or technology committees are explicitly chartered with AI oversight and coordination, ensuring that AI is not treated as an isolated innovation project but as part of the core control environment.

3. Data quality and lineage for key decisions

Regulators have long emphasized that reliable risk and business decisions depend on robust data management, including quality, aggregation, and traceability. Those same expectations now extend to AI.

Boards should understand whether foundational data issues such as silos, inconsistent definitions, incomplete or biased data, and poor timeliness are being systematically addressed with governance, investment, and accountability.

If critical AI-driven decisions are being made on data that management privately views as “good enough,” the board should challenge why that is acceptable against the stated risk appetite.
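
One way to make “good enough” testable rather than a private judgment is to encode the stated risk appetite as explicit thresholds that data must clear before feeding a critical decision. A minimal sketch, with purely hypothetical threshold values:

```python
# Minimal data quality gate: the thresholds are hypothetical policy values,
# not recommendations; real gates would also cover bias, lineage, and more.
QUALITY_THRESHOLDS = {
    "completeness": 0.98,   # share of non-missing values in key fields
    "timeliness_days": 30,  # maximum age of the freshest source refresh
}

def quality_gate(completeness: float, data_age_days: int) -> list[str]:
    """Return the list of failed checks; an empty list means the gate passes."""
    failures = []
    if completeness < QUALITY_THRESHOLDS["completeness"]:
        failures.append(f"completeness {completeness:.2%} below threshold")
    if data_age_days > QUALITY_THRESHOLDS["timeliness_days"]:
        failures.append(f"data is {data_age_days} days old, exceeding limit")
    return failures

# Example: a failing gate should block or escalate the AI-driven decision.
print(quality_gate(completeness=0.95, data_age_days=45))
```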

4. Evidentiary readiness

A critical shift is moving from “Do we have AI controls?” to “Can we prove how and why a system behaved the way it did?” Evidence means logs, testing records, approvals, model documentation, monitoring results, overrides, and remediation actions that can be produced on demand for regulators, auditors, customers, or courts.

At scale, this requires tooling and structured, documented processes, not reactive one-off investigations every time something goes wrong. Boards should test this by asking for a recent example and reviewing the actual evidence, not just a description of the process.
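
To ground what “produced on demand” might look like at the lowest level, the sketch below appends one structured audit record per consequential decision. The fields are illustrative assumptions, not a standard; production systems would add signing or hash chaining so that tampering is evident.

```python
import json
from datetime import datetime, timezone

def record_decision_evidence(path: str, *, system: str, model_version: str,
                             inputs_ref: str, output: str, approver: str,
                             override: bool = False) -> None:
    """Append one audit record as a JSON line to durable storage."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,
        "inputs_ref": inputs_ref,  # pointer to the exact input data used
        "output": output,
        "approver": approver,
        "override": override,      # flags human overrides for later review
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```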

Potential board blind spots

Most boards now see polished presentations on AI opportunity; far fewer see a clear picture of operational reality. These blind spots show up repeatedly:

- Fragmented oversight

Different committees may touch AI, including analytics, audit, risk, technology, data, and even compensation, yet no one pulls together a coherent view of risks, controls, and performance.

This fragmentation can leave gaps in accountability for areas like third-party AI, bias, model drift, or safety. The board should understand who is responsible for stitching together that integrated view and how often it is reviewed.

- Backward-looking reporting

Many boards still receive quarterly, static risk reports, while the risks created by AI are increasingly real-time and data-driven. That lag makes it hard to spot emerging issues or assess whether mitigation plans are working.

Directors should push for leading indicators and trigger-based escalation. What thresholds, anomalies, or events cause AI-related issues to be reported between regular meetings? (A minimal sketch of such a trigger check follows this list.)

- Overconfidence in innovation narratives

Some management teams still frame governance as a brake on innovation. In practice, companies that scale AI successfully treat robust AI and data governance as a precondition for innovation at scale, not an obstacle.

Boards should be wary of aggressive growth stories that are not accompanied by equally robust controls, talent, and investment in data foundations.
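
As promised above, here is a minimal sketch of trigger-based escalation: a check run alongside routine monitoring that names which thresholds have been breached. The metrics and limits are hypothetical placeholders; real triggers belong in board-approved policy, reviewed alongside risk appetite.

```python
# Hypothetical escalation triggers; the names and limits are illustrative.
ESCALATION_TRIGGERS = {
    "drift_score": 0.15,    # population-stability-style drift metric
    "open_incidents": 3,    # unresolved AI incidents at any one time
    "override_rate": 0.10,  # share of model decisions overridden by humans
}

def needs_out_of_cycle_report(metrics: dict[str, float]) -> list[str]:
    """Return which triggers fired; any hit means the board hears about it
    before the next scheduled meeting, not after."""
    return [name for name, limit in ESCALATION_TRIGGERS.items()
            if metrics.get(name, 0) > limit]

print(needs_out_of_cycle_report(
    {"drift_score": 0.22, "open_incidents": 1, "override_rate": 0.04}))
# -> ['drift_score']
```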

Testing the one BIG question

Using the central question “Can we evidence that AI and data decisions are under control and aligned with our risk appetite today?”, directors can drill into specifics such as:

  • “Show us the current inventory of AI and high-risk models. Which are most critical, and who is accountable for each?”
  • “How is AI integrated into our existing risk management and internal control frameworks, including model risk and cyber?”
  • “What metrics do we use to monitor AI performance, drift, incidents, and remediation? How often, and with what triggers, do they reach the board?”
  • “If a regulator, key client, or court asked us to explain a controversial AI decision next week, what specific evidence could we produce within 48 hours?”
  • “How does our current practice compare to emerging standards and frameworks, for example ISO/IEC 42001, sector guidance, and local regulatory expectations in our key markets?”

A simple illustration makes this concrete:

Consider a bank deploying AI for credit decisions. To credibly claim control, it should be able to show:

  • the data sources used and the quality controls applied to them,
  • model validation and monitoring results over time,
  • fairness and bias testing, including how thresholds were set and reviewed,
  • defined override and escalation processes, and
  • a clear record of approvals, changes, and decommissioning decisions.

All of this should be tied to named owners and documented within the bank’s governance and risk systems, not scattered across emails and slide decks.
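
Directors need not read code, but they can ask whether the completeness of this evidence package is itself checked automatically rather than assembled by hand when trouble arrives. A hypothetical sketch of such a check, with the required items distilled from the list above:

```python
# Required evidence items for a high-risk credit model (illustrative list,
# distilled from the illustration above; real requirements come from policy).
REQUIRED_EVIDENCE = [
    "data_sources_and_quality_controls",
    "validation_and_monitoring_results",
    "fairness_and_bias_testing",
    "override_and_escalation_process",
    "approvals_changes_decommissioning_log",
]

def missing_evidence(bundle: dict[str, str]) -> list[str]:
    """Compare a model's evidence bundle (item -> document reference)
    against the required list and name the gaps."""
    return [item for item in REQUIRED_EVIDENCE if not bundle.get(item)]

# Example: a bundle with gaps that should undercut any "under control" claim.
print(missing_evidence({
    "data_sources_and_quality_controls": "dq-report-2026-02.pdf",
    "validation_and_monitoring_results": "mrm-validation-v3.pdf",
    "fairness_and_bias_testing": "",
}))
```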

Director’s takeaway

At your next meeting, ask: “If we were challenged tomorrow by a regulator, investor, or court, could we clearly evidence that our AI and critical data decisions are controlled, auditable, and aligned with our risk appetite?”

Then press for concrete inventories, metrics, and documentation, not just strategy slides or demo videos. Insist that AI and data governance be treated as core board business, on par with financial reporting and cyber risk.

If the evidence is thin or scattered, the board has a clear mandate: upgrade oversight before AI-driven decisions outpace the organization’s ability to control them.

About the author:

Monica Richter is a global board director, strategic advisor, and former Chief Data Officer with 25+ years leading data strategy, governance, and risk in regulated environments. As CDO at Dun & Bradstreet and SVP at S&P Global Ratings, she operated at the nexus of data, regulation, analytics, and board accountability — building and remediating enterprise data functions under scrutiny.

Today, as Executive in Residence at ModOp Strategic Consulting, she advises on data/AI governance, risk alignment, monetization, and evidentiary readiness. Her guidance stems from hands-on experience, not theory.