Before You Read On: Join a free AGRC-LGCA webinar hosted by Olga Solovyeva and Brook Horowitz, ‘Emerging Technology Risks for Business: What Leaders Need to Know’, on 25 February 2026. This practical briefing for business leaders focuses on governance, accountability, and effective oversight of AI and emerging technology risks in a rapidly evolving regulatory landscape. Sign up here.

Brook and Olga also run training with AGRC partner the London Governance and Compliance Academy (LGCA) on responsible tech culture and information integrity for finance and compliance professionals. More details here.


We are living through an AI race – one in which companies around the world compete to adopt, deploy and monetise artificial intelligence. Innovation cycles move at breakneck speed, new platforms and capabilities emerge monthly, and the business upside seems almost boundless. Yet this rush comes with deep uncertainty. AI remains, by and large, the least-regulated major technology. That, however, is changing fast. With new regulatory regimes, standards and governance expectations coming online, organisations that treat AI as ‘just another IT project’ risk being blindsided by compliance, reputational and governance failures.

As regulators and standards bodies catch up, boards, risk teams and leadership must confront a stark reality: the AI you build today may expose you to serious liability tomorrow. Faced with this uncertainty, many board members take the path of least resistance and block AI projects entirely. Yet this veto approach carries its own risk: it can severely limit a company’s ability to innovate and compete, and it merely postpones the governance challenge rather than solving it.

The smart question isn’t ‘should we do AI?’ but rather ‘how do we do AI responsibly and resiliently?’

The Regulatory Shift

For now, much of the AI landscape remains in the realm of self-regulation and best practice. But regulation is no longer a theoretical prospect; it is rapidly taking shape.

The ISO/IEC 42001 standard, published in 2023, is the first global management-system standard for AI. It provides a structured framework to govern AI across its lifecycle: from design and data governance to risk assessment, impact evaluation, supplier oversight and continuous monitoring.

Meanwhile, the EU Artificial Intelligence Act (AI Act), which came into force in August 2024, is the first comprehensive law regulating AI at scale in any jurisdiction. It adopts a risk-based approach, outlawing ‘unacceptable-risk’ uses, imposing stricter requirements on ‘high-risk’ AI systems, and requiring transparency, human oversight, and documentation.

Taken together, ISO 42001 and the EU AI Act mark a turning point. AI is no longer fringe; it’s becoming a regulated domain, and organisations must adapt their governance accordingly.

Yet, at present, many companies remain unprepared. That’s because while the regulatory and standards environment is evolving, so too are the types of AI and the risks they carry.

Not All AI Is the Same – and Neither Are the Risks

Part of the complexity stems from the fact that AI is not a singular thing. Different types of AI bring very different risk profiles and governance challenges. Some of the main categories:

Generative AI – language models, image/video synthesis, code generation, content creation. These raise issues of copyright, misinformation, deepfakes, content integrity, data bias, and misuse.

Agentic / autonomous AI – AI that can act, make decisions and perform tasks with minimal human oversight (e.g., autonomous agents, decision-support systems, automated workflows). These introduce risks around decision accuracy, accountability, auditability, and unintended consequences.

Traditional AI / algorithmic systems – predictive analytics, scoring, recommendation engines, risk models, data-driven decisioning. These carry long-known risks around bias, fairness, privacy, data governance, transparency, and regulatory compliance (e.g., consumer protection and data protection laws).

Each category demands a different approach. For example, generative AI requires robust content-integrity oversight and media-forensics capabilities; autonomous agentic systems need strong audit trails, human-in-the-loop governance and lifecycle risk management; and conventional algorithmic systems need rigorous testing, bias mitigation, data governance and regulatory compliance frameworks.
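For teams that want to operationalise this distinction in their risk tooling, here is a minimal Python sketch that maps each AI category to a baseline set of controls. The category names and control lists are illustrative assumptions for discussion, not terminology drawn from ISO 42001 or the EU AI Act.

```python
# Illustrative only: baseline governance controls per AI category.
# Category names and controls are examples, not an official taxonomy.

BASELINE_CONTROLS = {
    "generative": [
        "content-integrity review",
        "copyright and licensing checks",
        "deepfake and misinformation monitoring",
    ],
    "agentic": [
        "audit trail for every automated action",
        "human-in-the-loop approval thresholds",
        "lifecycle risk reviews",
    ],
    "algorithmic": [
        "bias and fairness testing",
        "data-governance and privacy checks",
        "regulatory compliance mapping",
    ],
}

def controls_for(category: str) -> list[str]:
    """Return baseline controls for a category; fail loudly if unclassified."""
    if category not in BASELINE_CONTROLS:
        raise ValueError(f"Unclassified AI category: {category!r}")
    return BASELINE_CONTROLS[category]

if __name__ == "__main__":
    for control in controls_for("agentic"):
        print(control)
```

The point of the sketch is the failure mode: a use-case that cannot be classified should halt the process, because an unclassified system is exactly the kind that ends up governed by a blanket policy.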

But many organisations treat them all simply as ‘AI’, with one blanket governance or risk policy. That’s insufficient, and potentially dangerous.

Three Strategic Responses

Depending on a company’s size, core AI exposure, and strategic orientation, there are broadly three strategic responses to this phase of AI risk and compliance. Most organisations will start with the first, progress to the second as they scale, and eventually adopt the third if AI becomes core to their business model.

1. Foundational Compliance Posture

For organisations just starting with AI or considering it.

  • Establish an AI governance framework aligned with ISO 42001 (or equivalent)
  • Conduct a thorough classification of use-cases (generative, agentic, analytical) and map your risk exposure
  • Set up internal documentation, lifecycle management, supplier oversight, impact assessments, human-in-the-loop policies

This posture ensures minimal exposure to regulatory and compliance risk, places you ahead of potential enforcement, and provides a credible basis for future AI adoption – without rushing into full deployment.

2. Operational Risk & Integrity Posture

For organisations already using AI, especially generative or publicly visible AI – or organisations where brand, trust, compliance, privacy and content integrity matter (media companies, the public sector, consumer services).

  • Introduce regular ‘readiness drills’: content-integrity simulations, deepfake / misinformation threat assessments, rapid-response protocols
  • Combine technical AI governance with communication, crisis, and media-forensics capabilities
  • Embed decision-rights, escalation workflows, and accountability mechanisms for AI output and misuse risks

This posture acknowledges that regulation will not eliminate risk; it equips organisations to respond swiftly, credibly, and with minimal fallout.

3. Strategic Governance & Competitive Posture

For organisations that want to leverage AI fully, but with controls baked in so they can stay ahead of regulation, build stakeholder trust, and create long-term value.

  • Adopt a full management-system approach (e.g., ISO 42001), integrate AI governance into enterprise risk and compliance frameworks
  • Maintain ongoing risk assessments, supplier and data governance, auditability, lifecycle reviews – not just project-by-project
  • Build transparency, documentation, human oversight and reporting – preparing for external regulation, internal accountability, and stakeholder (customer, regulator, board) scrutiny

This posture turns compliance into a competitive advantage: trust becomes a differentiator; responsible AI becomes a selling point.

Your Action Plan: Getting Started

Regardless of which posture fits your organisation, here’s how to begin:

Map your AI landscape – Identify all current and planned AI use-cases and classify them (generative, agentic, analytic, decision-support, R&D, etc.). Understand what you’re actually dealing with before building governance around it; a minimal inventory sketch follows this plan.

Benchmark your governance – Assess your current framework against ISO 42001 and EU AI Act obligations (or similar for non-EU jurisdictions). Identify gaps in documentation, oversight, risk assessment, and supplier management.

Establish clear accountability – Create an AI Risk & Integrity Committee or designate responsible roles with clear decision rights, escalation procedures, and accountability. AI governance cannot be diffused across the organisation without ownership.

Test your readiness – Run an AI integrity or crisis simulation. Test team response to content verification challenges, deepfake scenarios, misinformation handling, and reputational risk. Find the gaps before a real crisis does.

Build continuous oversight – Design and implement an ongoing AI governance programme aligned with a management standard like ISO 42001. Embed lifecycle oversight, documentation requirements, regular audits, and periodic reviews into operations – not just one-off compliance checks.
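For organisations that prefer to track the first of these steps in tooling rather than in spreadsheets, the sketch below shows one minimal way to hold an AI use-case inventory and flag obvious governance gaps. All field names, risk tiers and the helper function are our own illustrative assumptions, not structures prescribed by ISO 42001 or the EU AI Act.

```python
# Illustrative only: a minimal AI use-case inventory with a simple gap check.
# Field names and risk tiers are assumptions, not regulatory terminology.

from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    category: str        # e.g. "generative", "agentic", "algorithmic"
    owner: str           # an accountable role, not a shared inbox
    risk_tier: str       # e.g. "minimal", "limited", "high"
    documented: bool = False
    human_oversight: bool = False

def governance_gaps(inventory: list[AIUseCase]) -> list[str]:
    """Flag use-cases missing documentation or human oversight."""
    gaps = []
    for uc in inventory:
        if not uc.documented:
            gaps.append(f"{uc.name}: no lifecycle documentation")
        if uc.risk_tier == "high" and not uc.human_oversight:
            gaps.append(f"{uc.name}: high-risk use without human oversight")
    return gaps

if __name__ == "__main__":
    inventory = [
        AIUseCase("customer-support chatbot", "generative", "Head of CX", "limited"),
        AIUseCase("credit-scoring model", "algorithmic", "Chief Risk Officer", "high"),
    ]
    for gap in governance_gaps(inventory):
        print(gap)
```

Even at this toy scale, the structure forces the questions that matter: who owns the system, how risky is it, and is the oversight proportionate.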

Why This Matters Now

In the current climate, organisations that treat AI governance as optional or ‘nice to have’ may find themselves playing catch-up when regulation, audits, or crises hit. Compliance regimes like the EU AI Act already apply, and international standards such as ISO 42001 give regulators and auditors a clear yardstick to measure safety, governance, and accountability.

Failing to act now means risking:

  • Regulatory penalties or forced shutdowns of AI projects
  • Severe reputational and brand damage (especially for generative / agentic systems)
  • Loss of board or stakeholder trust, and inability to scale AI productively when needed

Conversely, acting now – with structure, discipline, and foresight – lets companies treat AI not as a gamble, but as a managed strategic asset.

Conclusion

The AI race is on. But so is the compliance and governance countdown. As regulation, standards and public scrutiny tighten, organisations have a narrow window to build the systems, processes, and culture that will allow them to use AI safely and profitably.

Companies that merely chase innovation without governance will likely pay the price. The smart ones will adopt a strategic, risk-aware, integrity-first posture: blending compliance, governance and readiness with business ambition.

Now is the time to ask the hard questions, map the risks, and build your foundation – before AI governance becomes a liability rather than an opportunity.


Olga Solovyeva, PhD, is an expert in digital risk, AI governance, and tech communication. She is a published author with a PhD in Responsible Tech Business and held academic positions before moving into full-time training and consultancy as Director of Digital Risk and AI Governance at Culture of Business.


Brook Horowitz is a governance and business integrity professional with over 20 years of experience in international business and the non-profit sector. His experience includes senior advisory and trustee positions with organisations such as UNDP, FCDO, the G20 and the B20, and he has advised many governments on legal and policy reform. Brook is the Executive Director of Culture of Business.