The End of “Trust Us, It Works”

A risk committee chair looks up from a credit scoring report and asks a deceptively simple question: “Why did the model do that?” In the past decade, firms could rely on claims of accuracy, speed and scale to justify their AI-driven workflows. Today that is no longer enough. Across the European Union and the UK, regulators are making clear that firms must be able to explain how their AI systems arrive at decisions and demonstrate robust governance around them, not just produce performance metrics. This is especially true where models influence high-stakes outcomes such as lending, pricing or compliance enforcement. In both jurisdictions the regulatory focus is shifting from ethical niceties to concrete expectations about accountability and traceability. For regulated firms, explainability has become less of a technical luxury and more of a licence to operate: “trust us, it works” will not satisfy auditors, supervisors or customers, and failure to explain could carry real consequences.

Regulation Has Changed the Question Being Asked

In today’s regulatory landscape the focus has shifted from whether an AI system achieves certain outputs or key performance indicators to understanding how and why it makes decisions. Under the EU’s Artificial Intelligence Act, high-risk systems such as credit scoring or fraud detection must be transparent and traceable so that regulators, auditors and affected parties can see the logic behind decisions rather than just the end result. This is an explicit move beyond simple performance reporting.

In the UK, the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA) are steering firms towards a deeper understanding of their own AI models, embedding explainability and governance expectations across existing frameworks. Rather than adopting a prescriptive rulebook, UK regulators emphasise principles such as appropriate transparency and accountability, meaning firms must be able to narrate decision logic and risks in terms supervisors can engage with.

This trend reflects a growing “regulatory curiosity”, where explainable decision narratives matter as much as the numerical performance figures once cherished by data science teams.

From Accuracy to Accountability

For years data science teams pursued marginal gains in accuracy, assuming the most complex model was also the most valuable. That logic is now being questioned. A credit model that is slightly more accurate but cannot explain why it declined a self-employed applicant is no longer “better” in any practical sense. Regulators, ombudsmen and courts increasingly expect firms to justify individual outcomes, not hide behind aggregate statistics.

This is where accountable performance comes into play. In consumer lending, firms must explain adverse credit decisions clearly to customers, a requirement reinforced by guidance on model risk and explainability from the European Banking Authority (EBA). In insurance, explainable pricing models are becoming essential when firms are challenged on differential pricing or potential bias, particularly in retail products. Claims handling offers another example. Automated triage tools may speed up settlements, but without transparent reasoning they can fuel disputes rather than reduce them. Even in fraud detection, banks report that interpretable models support quicker internal escalation and more constructive supervisory conversations, echoing recent discussion papers from the FCA on AI and data use.
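
To make that concrete, here is a minimal sketch of how adverse reasons can be derived directly from a model rather than drafted after the fact. It assumes a simple logistic regression, where each feature’s contribution to the score is its coefficient times its deviation from the average applicant; richer models typically use attribution methods such as SHAP, and all feature names and reason texts here are hypothetical.

```python
# Illustrative only: feature names, reason texts and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["credit_utilisation", "missed_payments_12m", "account_age_months"]
REASON_TEXT = {
    "credit_utilisation": "High utilisation of existing credit limits",
    "missed_payments_12m": "Missed payments in the last 12 months",
    "account_age_months": "Limited length of credit history",
}

# Toy training data; class 1 stands for "declined" in this sketch.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))
y_train = (X_train @ np.array([1.5, 2.0, -1.0]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)
feature_means = X_train.mean(axis=0)

def adverse_reasons(x, top_n=2):
    """Top reasons pushing this applicant towards a decline.

    For a linear model, each feature's contribution to the log-odds is its
    coefficient times its deviation from the 'average' applicant.
    """
    contributions = model.coef_[0] * (x - feature_means)
    order = np.argsort(contributions)[::-1]  # largest push towards decline first
    return [REASON_TEXT[FEATURES[i]] for i in order[:top_n] if contributions[i] > 0]

applicant = np.array([2.1, 1.8, -0.5])
if model.predict(applicant.reshape(1, -1))[0] == 1:
    print("Decision: declined")
    for reason in adverse_reasons(applicant):
        print(" -", reason)
```

The point of structuring it this way is that the reason codes come from the same arithmetic as the decision itself, so the customer-facing explanation cannot drift away from what the model actually did.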

The trade-off is no longer technical but strategic. Explainability is not a bolt-on for regulators. It is a design choice that determines whether an AI system can be defended, trusted and allowed to operate at scale.

You Can’t Audit a Mystery

In many regulated firms, the collision between AI and governance happens after deployment. Internal audit teams arrive with checklists, only to find models that adapt, retrain and evolve faster than documentation can keep up. A fraud detection model that shifts its thresholds weekly may perform well, but when auditors ask why a transaction was blocked six months ago, “the model has moved on” is not an acceptable answer.

This is where opaque AI strains model risk management and operational resilience. Traditional model documentation, written once and approved, works for static systems. It fails for machine learning models that change behaviour as data changes, a challenge highlighted by supervisors following UK banking incidents linked to poorly understood automated controls.

Leading firms are responding with new practices. Continuous explainability tools generate explanations alongside decisions, not months later. Decision replay allows firms to reconstruct what a model would have decided at a specific point in time. Some are storing explanation logs next to transaction logs, so audit trails show not just what happened, but why.
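
A minimal sketch of what such an explanation log can look like: the decision, a pinned model version, an input snapshot and the reasons generated at decision time, written as one record next to the transaction itself. The field names and `log_decision` helper are illustrative assumptions, not any particular vendor’s schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_file, model_version, inputs, decision, explanation):
    """Append one audit record: what was decided, by which model, and why."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # pin the exact artefact that decided
        "inputs": inputs,                # snapshot of the features as seen
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
        "explanation": explanation,      # reasons generated at decision time
    }
    log_file.write(json.dumps(record) + "\n")

with open("decision_log.jsonl", "a") as log:
    log_decision(
        log,
        model_version="fraud-model-2024-06-01",
        inputs={"amount": 912.50, "country": "GB", "new_payee": True},
        decision="blocked",
        explanation=["Unusually large amount for this account",
                     "First payment to new payee"],
    )
```

Storing the model version and an input hash is what makes later replay possible: an auditor can retrieve exactly the artefact and data that produced the decision, even after the live model has retrained.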

The deeper issue is structural. Governance frameworks are static, while AI systems are dynamic. Explainability is infrastructure for ongoing assurance, not a one-off exercise at model sign-off.

AI That Can Defend Itself

AI decisions now sit at the intersection of regulatory exposure, reputational risk and legal challenge. A rejected loan, a cancelled insurance policy or an automated fraud block can move rapidly from a customer complaint to supervisory scrutiny or a headline. Yet boards are increasingly asked to approve these systems without a clear grasp of how decisions are reached in practice.

This is where explainability becomes critical. Firms are recognising that AI systems must justify their decisions under stress. During regulatory investigations, litigation or media attention, the central question is not whether a model performed well on average, but whether its reasoning can be explained consistently and retrospectively. UK regulators have reinforced this expectation through commentary on management accountability and AI governance, while EU supervisory bodies connect explainability with liability, redress and consumer protection.

Some banks now pressure test models by replaying disputed decisions months later to see whether the explanations still stand up. Insurers are embedding explanation outputs into complaints handling to defend decisions. For boards, the implication is simple: explainable AI is a governance control. If your AI cannot explain itself, someone else will.
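
A minimal sketch of that replay test, continuing the logging example above. The rule-based model and explainer here are trivial stand-ins, assumed for illustration; in practice a registry would return the pinned model artefact and its attribution code, and the disputed record would come from the explanation log.

```python
def rule_model(inputs):
    # Stand-in for a versioned model artefact retrieved from a registry.
    return "blocked" if inputs["amount"] > 500 and inputs["new_payee"] else "allowed"

def explain(inputs):
    # Stand-in for the explainer that shipped with that model version.
    reasons = []
    if inputs["amount"] > 500:
        reasons.append("Unusually large amount for this account")
    if inputs["new_payee"]:
        reasons.append("First payment to new payee")
    return reasons

MODEL_REGISTRY = {"fraud-model-2024-06-01": (rule_model, explain)}

def replay_check(record):
    """Re-run a logged decision on the pinned model version and compare."""
    model, explainer = MODEL_REGISTRY[record["model_version"]]
    return {
        "decision_matches": model(record["inputs"]) == record["decision"],
        "explanation_matches": explainer(record["inputs"]) == record["explanation"],
    }

disputed = {
    "model_version": "fraud-model-2024-06-01",
    "inputs": {"amount": 912.50, "country": "GB", "new_payee": True},
    "decision": "blocked",
    "explanation": ["Unusually large amount for this account",
                    "First payment to new payee"],
}
print(replay_check(disputed))  # both True if the pinned artefacts are intact
```

If either check fails, the firm learns before the regulator does that its audit trail and its live system have diverged.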

What Leading Firms Are Doing Differently Now

Leading firms are no longer treating explainability as something to fix once regulators start asking questions. Instead, they are designing for explanation from the outset. Data science teams are working alongside risk, legal and compliance colleagues before models are built, not after they are deployed. Choices about data, features and model complexity are made with future scrutiny in mind, rather than purely predictive performance.

In practice, this means automated credit decisions are designed with clear customer explanations already agreed, not drafted defensively after complaints arise. In insurance, pricing teams are aligning closely with conduct risk functions so that differential pricing can be explained consistently and fairly. Banks are also embedding explanations into customer journeys, making them part of everyday service rather than a last-resort response to disputes.

The biggest shift is cultural rather than technical. Explainability is increasingly seen as a foundation for trust, resilience and long-term growth. As regulators compare firms more closely, explainability maturity is emerging as a competitive differentiator. Firms that can explain decisions clearly face fewer challenges, resolve issues faster, and avoid repeated regulatory friction.

Explainability as a Strategic Signal

Explainable AI is not about putting the brakes on innovation. It is about making innovation durable. As regulators sharpen expectations around accountability and model risk, guidance from the UK Information Commissioner’s Office and the EU AI Act point the same way. Explainability signals control to supervisors, credibility to customers, and assurance to boards. It shows a firm can defend decisions when scrutiny rises. In the next phase of AI adoption, the firms that win won’t just have smarter models; they’ll have models that can explain themselves under pressure.

And what about you…?

  • Do your board and senior leaders genuinely understand where AI is being used, or are they relying on trust and assumptions?
  • Where might a lack of explainability expose you to regulatory, reputational or legal risk over the next two years?