Macfarlanes LLP | Martha Campbell | Alishea Patel
The European Union is at the forefront of regulating artificial intelligence (AI), with the new AI Act (the Act) being one of the first significant legislative initiatives to address AI globally. The Act entered into force on 1 August 2024, with the majority of its rules becoming effective from 2 August 2026. It introduces a risk-based approach to the regulation of AI, classifying AI systems according to the risk they pose to users, with the most onerous obligations reserved for systems that present a “high” risk.
“By guaranteeing the safety and fundamental rights of people and businesses, the Act will support human-centric, transparent and responsible development, deployment and take-up of AI in the EU” – statement by President von der Leyen on the AI Act.
The Act’s extraterritorial reach is particularly noteworthy: it applies not only to AI developers and users within the EU but also to international organisations that offer AI systems in the EU or whose AI systems affect individuals within the EU.
Risk-based rules: a quick guide
The Act imposes compliance obligations on providers, deployers, importers, distributors and product manufacturers of AI systems, calibrated to the level of risk a system poses. In a nutshell, the risk categories are:
- unacceptable risk: practices that are banned outright, such as social scoring and manipulative or exploitative techniques;
- high risk: systems that are permitted subject to strict obligations (including risk management, data governance and human oversight), such as AI used in recruitment, creditworthiness assessments or critical infrastructure;
- limited risk: systems subject to lighter transparency obligations, such as chatbots that must make clear that users are interacting with AI; and
- minimal risk: systems, such as AI-enabled spam filters, that attract no specific obligations under the Act.
The Act also contains additional obligations for general-purpose AI systems (GPAI) – AI systems, often trained on a significant amount of data using self-supervision, that are capable of performing a wide range of distinct tasks. In recognition of the potentially high risks inherent in such systems, the Act imposes stricter criteria on them, as well as an overarching requirement for “transparency” (i.e. for the individuals concerned to know they are interacting with an AI system) and “explainability” (i.e. the provision of a clear explanation of the model’s decision-making processes).
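To make these two concepts concrete, the sketch below shows one way a deployer might surface both a disclosure and a plain-language explanation alongside a model’s output. It is purely our own hypothetical illustration – the class and field names are not terms drawn from the Act, and real explainability measures will depend on the model and context of use:

```python
from dataclasses import dataclass

@dataclass
class ExplainedResponse:
    """Hypothetical wrapper pairing a model output with the two elements
    discussed above: a transparency disclosure and an explanation."""
    output: str
    disclosure: str   # transparency: the user knows this came from an AI system
    explanation: str  # explainability: plain-language account of the decision

def respond(model_output: str, top_factors: list[str]) -> ExplainedResponse:
    # Illustrative only: real explainability measures (e.g. feature
    # attribution) depend on the model and the context of use.
    return ExplainedResponse(
        output=model_output,
        disclosure="You are interacting with an AI system.",
        explanation="Main factors behind this result: " + ", ".join(top_factors),
    )
```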
These concepts are by no means new to the AI discussion, having featured in the European Commission’s 2020 White Paper on Artificial Intelligence, which recommended a requirement for “clear information to be provided as to the AI system’s capabilities and limitations, in particular the purpose for which the systems are intended, the conditions under which they can be expected to function as intended and the expected level of accuracy in achieving the specific purpose”.
Potential penalties under the Act are hefty: the most serious infringements can attract a fine of up to €35m or 7% of global annual turnover, whichever is higher, with lower caps applying depending on the nature of the non-compliance.
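For the most serious infringements, the headline cap is simply the greater of the two figures. As a purely illustrative sketch (the function below is our own and carries no legal weight – actual fines turn on a range of statutory factors):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative only: headline cap for the most serious infringements
    (up to EUR 35m or 7% of global annual turnover, whichever is higher)."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Example: an undertaking with EUR 1bn global turnover faces a cap of EUR 70m,
# since 7% of EUR 1bn exceeds the EUR 35m floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```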
Get AI-ready: a proactive blueprint
In preparation for the Act’s application, organisations with a link to the EU market should take proactive steps to ensure that AI is developed and deployed responsibly and lawfully. This includes:
- mapping AI systems: conducting a mapping exercise of all AI systems currently in use or in development to determine whether they would be captured by the Act and, if so, identifying their risk category and applicable compliance obligations (a minimal illustration of such an inventory record follows this list);
- implementing compliance measures: putting in place policies, procedures and training to ensure that only compliant systems are used and provided, and in particular developing a robust compliance strategy for “high” risk AI systems and GPAI that includes appropriate data governance, accuracy, cybersecurity, transparency and explainability measures;
- monitoring developments: staying informed about any developments to the AI Act, as well as the approach to AI regulation globally;
- data governance: reviewing and strengthening data governance policies, as the Act emphasises the quality of the data sets used by AI, which must be free from bias and compliant with data protection and copyright legislation; and
- documentation and record-keeping: maintaining thorough documentation and records of AI system assessments, compliance measures, and data sources to demonstrate adherence to the Act’s requirements.
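By way of illustration only – the field names and categories below are our own sketch, not terms prescribed by the Act – a mapping exercise of the kind described above might record, for each system, enough information to place it in a risk tier and track its obligations:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # The Act's risk-based tiers, as summarised earlier in this note.
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for an AI-mapping exercise."""
    name: str
    role: str                 # e.g. provider, deployer, importer, distributor
    purpose: str
    risk_tier: RiskTier
    data_sources: list[str] = field(default_factory=list)
    compliance_notes: str = ""

# Example entry: a CV-screening tool, a use case treated as high risk.
inventory = [
    AISystemRecord(
        name="CV screening assistant",
        role="deployer",
        purpose="shortlisting job applicants",
        risk_tier=RiskTier.HIGH,
        data_sources=["applicant CVs"],
        compliance_notes="requires human oversight and bias testing",
    ),
]
```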
This article first appeared on Lexology.