A mid-sized European fintech clears every automated compliance check. Transactions pass, alerts stay low, dashboards glow green. Months later, regulators intervene over suspicious activity patterns that the system never flagged. This is not unusual. In 2023, as the European Banking Authority (EBA) reported, several banks faced scrutiny despite sophisticated monitoring tools, exposing gaps between automated outputs and real-world risk.
The paradox is clear. More automation does not guarantee more safety. Compliance technology is advancing rapidly across the EU and UK, yet governance is struggling to keep pace amid diverging regulatory approaches. The danger is no longer non-compliance, but compliant systems making the wrong decisions at scale.
The New Compliance Landscape
Compliance has now moved far beyond manual box-ticking. First came rigid rules-based systems, then smarter platforms that learn from data. Today, many firms rely on AI-assisted tools to make or recommend decisions in real time. A UK challenger bank, for example, may now use automated transaction monitoring to review millions of payments daily, something no human team could ever manage.
This shift is happening under intense regulatory pressure. The EU AI Act promotes a risk-based model for AI use, while the UK favours a more flexible, principles-led approach, as outlined in Financial Conduct Authority (FCA) guidance.
Faced with seemingly ever-expanding rules, firms are automating to keep up. The result is a quiet but profound change. Compliance is no longer interpreted by people alone but increasingly executed by software logic. The tension is obvious. Speed and efficiency improve, yet in this environment accountability and understanding become harder to pin down.
Who Owns the Failure?
When an automated compliance system fails, responsibility rarely sits in one place. A vendor may supply the technology, an internal team configures it, and senior leaders approve its use. Yet when something goes wrong, each can point elsewhere. This diffusion has led to what some call “accountability laundering”, where risk is quietly shifted through procurement decisions.
Recent enforcement actions underline the point. The FCA has made clear that firms cannot outsource responsibility, even when using third-party RegTech. Similarly, the EBA stresses that accountability must remain within the institution.
Boards often sign off on systems they do not fully understand, assuming technical sophistication equals control. It does not. Automation reshapes liability rather than removing it. If no one can clearly explain how a decision was made, regulators will treat responsibility as collective. In practice, that means everyone owns the risk.
The Loss of Human Judgement
The loss of human judgement emerges when compliance systems rely on binary logic that cannot interpret context, intent or cultural nuance. Automated anti-money laundering alerts, for instance, often flag legitimate transactions simply because they deviate from expected patterns, leading to unnecessary investigations. Similarly, ESG classification tools may mislabel firms due to rigid scoring models that ignore sector-specific realities. In HR compliance, algorithmic screening can wrongly flag employee behaviour without recognising intent or context. This over-reliance creates a dangerous sense of false certainty, as systems optimise for consistency rather than correctness.
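To make the point concrete, here is a minimal, purely illustrative sketch of the binary logic described above: a hypothetical rule that flags any payment deviating from a customer's historical pattern, with no notion of context or intent. The function name, threshold and data are invented for illustration, not drawn from any real monitoring product.

```python
from statistics import mean, stdev

def flag_transaction(amount: float, history: list[float],
                     z_threshold: float = 3.0) -> bool:
    """Flag any transaction more than z_threshold standard deviations
    from the customer's historical mean -- pure pattern deviation."""
    mu = mean(history)
    sigma = stdev(history)
    return abs(amount - mu) > z_threshold * sigma

# A customer who usually spends ~100 makes a one-off legitimate
# purchase of 2,000 (say, a holiday booking). The rule fires anyway,
# because the system sees only the deviation, not the intent.
history = [95.0, 102.0, 98.0, 110.0, 101.0]
print(flag_transaction(2000.0, history))  # True: flagged, context ignored
```

The rule is internally consistent and perfectly repeatable, which is precisely the problem: it optimises for consistency rather than correctness.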
Regulators across the UK and EU increasingly emphasise proportionality and reasonableness, concepts which machines struggle to apply. Such errors carry financial and reputational costs, particularly in cross-border operations where cultural expectations differ and rigid rules can misinterpret normal business practices. Humans may introduce inconsistency, but they also provide essential judgement. Compliance without judgement becomes enforcement without understanding.
The Black Box Problem
Modern compliance tools often operate as opaque “black boxes”, driven by AI models, proprietary vendor logic, and layers of automated rules that few inside the business fully understand. A bank using transaction monitoring software, for example, may flag suspicious activity without being able to explain precisely why a customer was escalated, exposing the firm to challenge under regulatory scrutiny.
Explainability is no longer a technical luxury but an emerging regulatory expectation. Across the UK and EU, there is a clear shift towards auditability and transparency, particularly under frameworks such as the EU AI Act and FCA guidance on operational resilience. Yet many firms remain dependent on systems they cannot interrogate or override.
The real danger is cultural as much as technical. Teams begin to rubber-stamp outputs, assuming the system must be correct. Transparency underpins not only regulatory compliance but internal governance and trust. If your compliance system can't be interrogated, it can't be trusted.
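One practical antidote to the black box is to make every escalation carry its own explanation. The sketch below, with invented rule names and thresholds, shows the general idea: each rule that fires appends a human-readable reason code, so a decision can later be interrogated rather than rubber-stamped. It is a design pattern, not any vendor's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    customer_id: str
    escalate: bool
    reasons: list[str] = field(default_factory=list)

def review(customer_id: str, amount: float, country_risk: str,
           daily_limit: float = 10_000.0) -> Decision:
    """Every rule that fires records a reason code, so the
    escalation can be explained and audited after the fact."""
    reasons = []
    if amount > daily_limit:
        reasons.append(f"AMOUNT_OVER_LIMIT: {amount} > {daily_limit}")
    if country_risk == "high":
        reasons.append("HIGH_RISK_JURISDICTION")
    return Decision(customer_id, escalate=bool(reasons), reasons=reasons)

d = review("C-042", 15_000.0, "high")
print(d.escalate, d.reasons)
```

A system built this way can answer the question regulators increasingly ask: not just what was decided, but why.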
Scaling Flawed Rules
Automation does not simply apply compliance rules; it multiplies them at scale. A minor flaw in logic, once embedded in a system, can be repeated thousands of times a day. Consider risk scoring tools used in financial services. If historical data contains bias, such as disproportionately flagging certain nationalities, automation can reinforce and expand that bias across entire customer bases.
Legacy policies present a similar risk. Rules originally written for human judgement are often translated directly into code, without nuance. In anti-money laundering systems, outdated thresholds can trigger excessive false positives, overwhelming teams and obscuring genuine threats.
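The scale effect of an outdated threshold is easy to simulate. In this hypothetical sketch, a fixed reporting limit that was never re-calibrated meets a day of routine payments whose typical values have drifted upward; the threshold value and simulated distribution are invented for illustration only.

```python
import random

random.seed(1)

# Hypothetical: a fixed alert threshold written years ago and never
# re-calibrated as ordinary transaction values drifted upward.
LEGACY_THRESHOLD = 1_000.0

# Simulate one day of 100,000 routine payments.
payments = [random.lognormvariate(6.5, 1.0) for _ in range(100_000)]

alerts = sum(1 for p in payments if p > LEGACY_THRESHOLD)
print(f"{alerts} alerts from {len(payments)} routine payments "
      f"({alerts / len(payments):.0%} false-positive load)")
```

Roughly a third of ordinary payments trip the legacy rule, and every one of those alerts consumes analyst time that could have gone to a genuine threat.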
Regulators in the UK and EU are increasingly focused on fairness and algorithmic accountability. Small design errors are no longer isolated mistakes; they become systemic failures. Automation does not create bias, it industrialises it.
Rebuilding Oversight
Effective oversight today is not about slowing automation, but governing it intelligently. Leading firms are embedding genuine human-in-the-loop controls, where specialists actively review edge cases rather than merely endorsing system outputs. For example, several UK banks now operate tiered escalation models in transaction monitoring, ensuring unusual or ambiguous cases are routed to experienced analysts instead of being auto-resolved.
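The routing logic behind such a tiered model can be sketched in a few lines. This is a simplified, hypothetical illustration of the principle that nothing ambiguous is auto-resolved; the tier names and score thresholds are invented, not taken from any bank's actual escalation policy.

```python
def route_alert(score: float, ambiguous: bool) -> str:
    """Hypothetical three-tier escalation: clear low-risk cases are
    auto-resolved, uncertain or mid-score cases go to an analyst,
    and the highest scores reach a senior reviewer."""
    if score >= 0.9:
        return "senior_reviewer"
    if ambiguous or score >= 0.5:
        return "analyst_queue"
    return "auto_resolve"

print(route_alert(0.95, False))  # senior_reviewer
print(route_alert(0.3, True))    # analyst_queue: ambiguity overrides low score
print(route_alert(0.2, False))   # auto_resolve
```

The key design choice is the `ambiguous` flag: it guarantees that edge cases reach a human, which is what separates genuine human-in-the-loop control from mere endorsement of system outputs.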
Regular “challenge sessions” are also emerging as best practice. Here, compliance, legal and data teams interrogate automated decisions, testing assumptions and identifying drift. This reflects a shift towards treating compliance systems as living products that require continuous refinement, rather than static tools deployed once and left unchecked.
Cross-functional accountability is critical. Under evolving UK and EU expectations, including continuous assurance approaches highlighted by the FCA and the European Commission, boards are expected to understand and challenge automated processes. Oversight must evolve alongside automation. The goal is not less automation, but better-governed automation that earns trust internally and externally.
The Real Risk Isn’t Automation—It’s Abdication
The real danger in modern compliance is not automation itself, but the quiet abdication of responsibility that can accompany it. Technology can enhance consistency and scale, yet when leaders treat systems as infallible, oversight erodes. Compliance cannot be set and forgotten. It demands continuous interrogation, adaptation and accountability.
Regulators are already signalling a shift. In the UK and EU, enforcement is increasingly focused on decision-making processes, governance structures and the ability to explain outcomes, not just the outcomes themselves. Firms that cannot demonstrate control over their systems risk both regulatory and reputational damage.
The message for leadership is clear. Automation must be matched with active oversight. In the age of automated compliance, oversight is no longer optional—it is the only thing that makes compliance real.
And what about you…?
- Do we have clear accountability across teams for overseeing compliance technology, or are responsibilities fragmented and unclear?
- If a regulator asked us to justify a key automated decision today, would we be able to demonstrate both transparency and control?


