
This article won the prize awarded by the team of editors at Compliance Corylated as part of the 2025 Risky Women Writing Competition, presented in partnership with Ocorian. It was originally published here.
What if regulators could peek into financial firms’ systems without all the reporting? Are we ready for this blend of oversight and trust?
In 2019 the Bank for International Settlements (BIS) floated the concept of ‘embedded supervision’. The idea was to give supervisors direct access to financial services firms’ systems (specifically immutable distributed ledgers), enabling supervision and oversight to happen without separate reporting by the firm. At the time, embedded supervision was seen as a possible way of applying technology-neutral regulation to the supervisory challenges of decentralized finance and cryptoassets, with compliance monitored automatically by reading the market’s ledger. The ‘carrot’ for firms was a reduced, or even removed, need to actively collect, verify and submit data to the relevant regulators. At that point, embedded supervision was an interesting idea and a worthwhile goal, but there were multiple barriers to practical execution. Among other things, the challenges included the maturity of the technology and the need to build the trust required for firms to permit regulatory tools any form of direct access into their systems.
Since 2019, technological advances and artificial intelligence (AI) have revolutionized financial services. Part of that revolution is the pivotal importance of data and its governance. Data is seen as the new oil and, while management information has always been important, it is now, more than ever, the lifeblood of firms, bringing both challenges and opportunities. The same is true for regulators who, like firms, are seeking to reap the rewards of robust data governance.
Data governance
Data governance, done well, is an emerging core competency for firms. Data governance is a firm’s approach to the management of its data collection, processing, accuracy, retrieval, consistency and security. It encompasses who has responsibility for data management throughout the data lifecycle, what processes, policies, standards, methodologies and metrics are used to ensure effective data management, and what security protocols are used to keep data confidential. It also covers what data a firm holds, on what basis, and where (and how) it is stored. Robust data governance enables a firm to treat its data as an asset rather than merely something it holds as custodian.
A data governance framework is not one-size-fits-all and can come in many forms, but it must cover the whole data lifecycle – creation, use, communication, retention (in unmodified, original format), retrieval and destruction. It requires a detailed understanding of what data the firm needs to operate, what data it currently has stored in its systems and files, how data is processed, the basis on which certain data is collected and used, and where the data is stored.
An inherent benefit of business-wide data governance is clear line of sight to the data needed to fulfil external reporting obligations.
Emerging uses of AI
AI-enabled solutions are an obvious candidate to facilitate external regulatory reporting. The output of AI is critically dependent on the quality of the data inputs. In theory, a firm could deploy an AI solution to allow regulatory access to the specific systems or databases required to fulfil external reporting requirements. That said, there would need to be checks, balances and humans-in-the-loop. June 2025 research by Anthropic concluded that “This research also shows why developers and users of AI applications should be aware of the risks of giving models both large amounts of information and also the power to take important, unmonitored actions in the real world.” In other words, as it currently stands, it would be unduly risky to allow AI models to ‘speak’ directly to regulators without human oversight.
The same risks hold true for regulators, who not only use AI themselves but also need insight into how AI is being used within firms. Indeed, BIS’ Project Noor is an initiative that seeks to equip financial supervisors with independent, practical tools to evaluate and interpret the inner workings of AI models used by banks and other financial institutions. By combining explainable AI methods with risk analytics, the project aims to deliver a prototype through which supervisors can verify model transparency, assess fairness, and test robustness.
Embedded supervision
Bringing together the next generation of data governance and AI-enabled tools could enable embedded supervision to become a practical reality with all the potential benefits of low-cost compliance and equally low-cost supervision. Given the potential benefits, embedded supervision is a worthwhile goal, but trust would need to be built through extensive testing, coordination and feedback loops between regulators and those they regulate.
The question of trust, particularly when it relates to output of AI models, was highlighted in the May 2025 paper “Chat Bankman-Fried? An exploration of LLM alignment in finance” in which the Bank of Italy examined the ethical alignment of large language models (LLMs) in financial decision-making scenarios. The results were mixed and made clear that human analysis of outputs is still necessary – not only because, say, prompt engineering does not always yield the expected results, but also because models that present as satisfactorily aligned on average may still make the occasional wayward decision.
Embedded supervision was a concept that started life as a means of supervising decentralized finance. It now has the potential to usher in the next era of financial services supervision. It will take time to build both the trust and the tools required, but the potential for a substantially reduced cost burden on both regulator and regulated makes it worth investigating further.
From the Judges
“A strong technical piece, well written, clear perspective.”
“Liked how this took a BIG idea and examined the opportunities and complexities of embedded supervision becoming reality.”
“This article powerfully argues the case for embedded supervision as the next era of financial oversight. The piece is highly original, effectively synthesizing a past concept with current AI/data governance realities. It demonstrates masterful understanding, using credible, contemporary research (Anthropic, Bank of Italy) to support a compelling and logical argument.”
