Osborne Clarke | Rachel Couter | Capucine de Hennin

Financial services firms have an idea of the regulators’ pre-election approach to AI but will want clarity under the new government

A month before the UK general election was called, the Financial Conduct Authority (FCA) and the Bank of England (BoE), together with the Prudential Regulation Authority (PRA), delivered an outline of how they would take a principles-based approach to AI for the UK financial sector.

The regulators’ strategic approach was laid out in their required response to last year’s government white paper on AI. However, the response, published in April, contained little new substantive content and lacked specifics. The regulatory bodies were comfortable overall with their existing rules and regulatory framework, which they viewed as sufficient to implement the five overarching principles set out in the AI white paper.

What next under Labour?

Following the general election, a further complication for the financial services sector is how the newly elected Labour government will approach AI and its regulation, which may well differ from its predecessor’s policies. Prior to the general election, the Labour Party had confirmed that it would introduce an AI strategy and establish a government body to ensure tech regulation is clear and consistent.

Labour’s manifesto, published in June, more specifically pledged the creation of a new Regulatory Innovation Office to “help regulators update regulation, speed up approval timelines and coordinate issues that span existing boundaries”. It also promised binding regulation “on the handful of companies developing the most powerful AI models”.

Meanwhile, at the end of May, the House of Commons’ science, innovation and technology select committee delivered the findings of its inquiry into the governance of AI in the last report published by the previous Parliament. The committee said that its conclusions and recommendations “apply to whoever” is in government after the UK general election.

The committee said it supported the current sectoral approach to regulation but that the next government should be ready to introduce new AI-specific legislation if it “encounters gaps in the powers of any of the regulators to deal with the public interest in this fast developing field”. In the meantime, the King’s Speech at the state opening of Parliament on 17 July will set out the new government’s legislative plans for the new session, when any initial proposals for AI and financial services should become clear.

AI white paper and regulatory guidance

AI regulation under the 2023 white paper centres on the implementation of five cross-sector, high-level principles to guide regulators, reflecting a decentralised approach to AI governance. This was confirmed in the consultation response, which said there would be no new AI legislation for the UK.

In February this year, the now former Conservative government published initial guidance for regulators on implementing the UK’s AI regulatory principles. This set out the main considerations for regulators when developing tools and guidance for implementation in their regulated sectors, including potential key questions for regulators to ask themselves.

The guidance was “not intended to duplicate, replace or contradict regulators’ initiatives or existing statutory definitions of principles”, and was published as the first part of a three-phased governmental approach to issuing regulatory guidance.

The five principles for AI systems cover safety, security and robustness throughout the lifecycle; transparency and explainability of use and decision-making processes; fairness, so as not to undermine the legal rights of individuals or organisations, discriminate unfairly or create unfair market outcomes; accountability and governance for effective oversight of the supply and use of AI systems; and contestability of AI decisions and redress for outcomes that are harmful or create material risks of harm.

Overarching considerations

The former government’s guidance for regulators set out overarching considerations for the AI principles. It looked for transparency and public information on AI-related actions; opportunities for collaboration and knowledge exchange; consultation on other regulators’ guidance; work towards a coherent definition of what the AI principles mean and how they should be interpreted; an understanding of how these principles are interpreted and acted on by organisations; and tools and guidance for AI developers and deployers to understand how technical standards could help compliance with the principles.

The regulators responded to the former government’s request for an outline of their strategic approach to AI in line with the white paper’s expectations. They were asked to explain how AI applies within their responsibilities, including the relevance of their enabling legislation to AI, and to provide examples of steps taken to adopt the AI principles, a summary of guidance on how the principles interact with existing legislation, and comment on work undertaken on AI risks and approaches to tackling these emerging risks.

They were also asked to comment on steps taken to collaborate on AI-related issues that cut across regulatory remits and on the interaction and overlap of responsibilities between regulators; to explain their capability and resources and what was needed to regulate AI effectively; and to give a view of plans over the coming 12 months, such as risk assessments, tools and guidance, and stakeholder engagement.

FCA’s approach to AI

In April, the FCA in its AI Update acknowledged the rapid advancements in AI technology and the potential for it to transform the financial services sector. The UK financial regulator emphasised the importance of a safe and responsible adoption of AI, while also promoting innovation and competition.

The FCA held that its existing regulation was largely sufficient for AI regulation in line with the five principles. It emphasised, with respect to the safety, security and robustness principle, that there are a number of applicable “high-level principle-based rules”, such as Principle 2 (skill, care and diligence) and Principle 3 (management and control) of its Principles for Businesses, as well as the threshold conditions.

Alongside these overarching requirements, the FCA gave examples of some of its more specific rules. SYSC 4.1 of its senior management arrangements, systems and controls (SYSC) sourcebook requires firms to have sound security mechanisms in place for data and business continuity, while the SYSC 15A (operational resilience) requirements aim to ensure that firms are able to respond to, recover and learn from, and prevent future operational disruptions. The FCA also pointed to the senior managers and certification regime, which emphasises senior management accountability and is relevant to the safe and responsible use of AI.

One area where the FCA did identify a gap was in relation to the appropriate transparency and explainability principle, noting that its current framework does not specifically address this issue. However, the regulator highlighted that the provision of clear and fair information to consumers is a key priority in its governance and is covered by its Principles for Businesses, including the Consumer Duty, and by the requirements related to consumer understanding and communication.

Overall, the FCA seems confident in its current approach to regulation and its transferability to AI governance. Developing an understanding of the use of AI in UK financial markets appears to be a top priority for the regulator over the coming year, and it has outlined plans to explore new regulatory engagement models and testing environments, such as the Digital Sandbox, to facilitate the safe and responsible testing and assessment of AI technologies using synthetic data. The FCA has stated that the Digital Sandbox, as well as the Regulatory Sandbox and TechSprints, will actively support beneficial innovation in the context of AI.

The BoE and PRA

A letter published jointly by the BoE and PRA in April noted that the UK financial sector is adopting AI in a range of ways, and specifically referred to the BoE’s new secondary objective of facilitating innovation in the provision of financial market infrastructure services.

The BoE’s objective is to maintain monetary and financial stability in the UK, while the PRA’s is to promote the safety and soundness of regulated firms and to contribute to securing an appropriate degree of protection for insurance policyholders.

The BoE and PRA outlined in the letter how, like the FCA, they consider their current regulatory frameworks generally sufficient to govern AI. For example, with respect to the appropriate transparency and explainability principle, although the model risk management (MRM) principles for banks are not AI specific, they put forward explainability and transparency as factors to be considered when assessing a model’s complexity, and they demand increased oversight by banks as that complexity increases. The regulators also consider the MRM principles relevant to the accountability and governance principle.

However, both bodies indicated that parts of the regulatory framework, in their view, could benefit from clarification, particularly in relation to data management, MRM, governance and operational resilience, including third-party risk.

The letter noted that an AI taskforce had recently been established to allow the BoE to make progress safely in its own use of AI. The taskforce’s aims are to identify AI use cases and run assessment pilots; develop appropriate guardrails to ensure that risks from using AI are controlled, including an ethical framework for using AI models responsibly; identify training needs; and grow a culture in which these models are understood and can be used effectively.

Osborne Clarke comment

While it is useful to understand the regulators’ view of how AI falls within their regulatory frameworks, the responses published are very broad and leave it to regulated firms to assess precisely how AI should be developed and deployed at a practical level while complying with their regulatory obligations.

Crucially, despite being specifically asked to do so, the regulators have not mentioned any specific tools or guidance that they are planning or preparing for the industry, nor have they specifically responded to any of the questions suggested by the former UK government in its initial guidance for regulators on the implementation of the AI principles.

In January, Labour said in its plan for the financial services sector, Financing Growth, that the success of the sector would be crucial to its plans for UK economic growth. It said the development of AI was critical for the future of the sector in the UK and that “an agile approach to regulation” would be needed for financial services firms and organisations to use the technology to “boost growth in every part of the economy”. Whether or not new legislation specifically concerning AI and financial services is proposed in the King’s Speech on 17 July, it cannot be ruled out in future; nor, in the meantime, can further regulatory activity around AI in financial services.

An earlier version of this Insight was originally published as an article by IFLR.