CMS | Javier Torre de Silva

European Union

The AI Act is a uniform regulation on AI applicable to all 27 Member States, but it will have to be interpreted, applied and enforced by around 2,000 market surveillance authorities and by 208 fundamental rights protection authorities, each with a potentially different interpretation.

It would of course be easier to have a single AI regulator and market surveillance authority in the EU, or at least one per Member State, as is mostly the case with data protection authorities (in all EU Member States except Germany, with additional regional authorities in Belgium, Spain[1] and Finland). Nevertheless, this is not possible for AI. The reason is that AI is a transversal technology, meaning that it is (or will soon be) present in all sectors and activities. It is no longer possible to regulate the banking sector without regulating the use of AI by banks (this is why the central banks should be, and indeed are, market surveillance authorities for banks using AI: Recital 158 and art. 74.6 for all financial institutions), and the same can be said of all other regulated sectors, as the AI Act itself recognizes: insurance (Recital 158), biometrics for law enforcement, migration, justice and democracy (Recital 159 and art. 74.8), etc. The same applies to products: the market surveillance authorities for each of the products subject to the harmonized legislation listed in Annex I of the AI Act should be AI market surveillance authorities when those products have AI components. Given the ubiquitous nature of AI, and its intimate entanglement with the essence of products and services, its market surveillance cannot fall under a single authority in all cases.

Of course, there are exceptions to this dispersion of regulators, and the two main ones are the EU bodies and the general-purpose AI models. Indeed, the European Data Protection Supervisor is the only body competent to impose fines on the Union institutions, bodies, offices and agencies falling under the scope of the AI Act (art. 100). And the Commission is the only body competent to impose fines on providers of general-purpose AI models (art. 101), with the help of the AI Office (with powers to monitor and supervise compliance, art. 75, as well as powers of implementation and compliance, art. 89). Sometimes the boundaries of the powers of the AI Office will be difficult to determine: as explained in Recital 161 of the AI Act, “where an AI system is based on a general-purpose AI model and the model and system are provided by the same provider, the supervision should take place at Union level through the AI Office, which should have the powers of a market surveillance authority”. This may create difficult problems as regards agentic AI: when agents are provided by the same provider as the general-purpose AI model, their surveillance will, in principle, remain with the AI Office.

In all other cases, the AI Office is expected to help harmonize enforcement and to provide guidance to national authorities.

Furthermore, some countries may try to centralize part of the supervision of compliance with the AI Act, as is the case in Spain, where the existing Draft Bill[2] appoints the Agencia Española de Supervisión de Inteligencia Artificial as the national surveillance authority for most (not all) of the prohibitions of art. 5 of the AI Act, as well as for the high-risk AI systems described in paragraphs 1 to 5 of Annex III of the AI Act, and also for the transparency obligations applicable to all AI systems under art. 50 AI Act.

In Italy, the recent Legge 23 settembre 2025, n. 132, has appointed the Italian cybersecurity agency (Agenzia per la cybersicurezza nazionale, ACN) as the main national market surveillance authority (art. 20), but there are many others[3].

In Germany, the draft AI Market Surveillance Act appoints the Federal Network Agency (Bundesnetzagentur) as the main market surveillance authority, without prejudice to the existence of many others.

But still, for the remaining AI systems within the scope of the AI Act, and for obligations other than the transparency obligations under art. 50, a vast number of national AI market surveillance authorities exist in each EU Member State. This decentralized approach may be challenging and may result in regulatory fragmentation.

As the provisions of the AI Act regarding the governance structure, including art. 70 on national market surveillance authorities, apply from 2 August 2025 (Recital 179 and art. 113.b, referring to Chapter VII), all EU Member States were obliged to notify the EU Commission of the list of their national market surveillance authorities.

And they complied. The EU has recently published the list of all notified national market surveillance authorities, and there are around 2,000 of them. This number includes data protection authorities, regulators of financial institutions, regulators of insurance, reinsurance and intermediaries, authorities in the field of law enforcement, migration, justice and democracy (including judicial and electoral authorities), as well as the designated market surveillance authorities for all harmonized legislation (Annex I), which covers regulators in 38 different categories of products[4]. The European Commission has published the list of national market surveillance authorities by country[5] and by sector[6].

Furthermore, art. 77 of the AI Act refers to fundamental rights protection authorities and grants them certain powers. The list of fundamental rights protection authorities has also been published[7], and there are 208 of them[8].

As a consequence of the above, there will be more than 2,200 authorities with the power to interpret and enforce the AI Act.

And the powers granted to those authorities are far from negligible: all national market surveillance authorities enjoy the powers granted by Regulation (EU) 2019/1020, covering all the activities carried out and measures taken to ensure that AI systems “comply with the requirements set out” in the AI Act “and to ensure protection of the public interest covered by that legislation”. Art. 10 of Regulation (EU) 2019/1020 includes “effective market surveillance”, the “taking by economic operators of appropriate and proportionate corrective action” and the “taking of appropriate and proportionate measures where the economic operator fails to take corrective action”.

The AI Act also provides the market surveillance authorities with additional powers not included in Regulation (EU) 2019/1020, such as those regarding high-risk systems (testing, notification of incidents, complaints, evaluation, enforcement, etc.), those regarding systems that, even if compliant with the AI Act, may present a risk (art. 82 AI Act), and even access to the source code of high-risk AI systems under certain circumstances (art. 74.13).

The power to be granted access to the source code of a high-risk AI system, attributed to all (approximately) 2,000 EU market surveillance authorities, is subject to certain conditions being met (necessity to assess conformity, and prior exhaustion or insufficiency of the data and documentation provided), but even with these two conditions, it is probably excessive in most cases. The source code of an AI system may change continuously, may amount to an enormous volume of code and may be too difficult to interpret for most European market surveillance authorities. Such a measure would not be proportionate (or even feasible) in almost any case, except under the most extreme circumstances and a total lack of cooperation. Of course, it will be possible to appeal against any such measure.

Those powers are granted to all national market surveillance authorities, including, as an example, the Regional State Administrative Agency of Northern Finland (Occupational Safety and Health Division), the Hellenic Recycling Agency (EOAN) in Greece and the Regional Inspectorate for Economic Activities in Madeira (Portugal), each one, of course, within the limits of its own competences.

Having said that, it is not surprising that the AI Act established mechanisms of Union safeguard, coordination and cooperation.

Those mechanisms will be less necessary in light of the recent approval of Guidelines and Codes of Practice: the Guidelines on the prohibited AI practices established by the AI Act, the Guidelines on the AI system definition, the Guidelines on the scope of GPAI obligations, the Template for public summaries of training content, the GPAI Code of Practice and (in the field of data protection) the Guidance for Risk Management of Artificial Intelligence Systems of the European Data Protection Supervisor.

But still, the possibility of contradictory interpretations remains.

The internal market requires a level playing field, which may be incompatible with different interpretations of the AI Act by different authorities. Therefore, the AI Act has established a Union safeguard procedure in art. 81: if a measure taken by a market surveillance authority of a Member State has been notified to the EU Commission and to the other Member States under art. 79.5 AI Act, then either the European Commission or any other market surveillance authority may raise an objection. In that case, the European Commission will enter into consultation with the market surveillance authority of the relevant Member State and the operator, shall evaluate the national measure and shall then decide whether the national measure is justified. This will hopefully solve most harmonization problems.

In addition to the above, art. 65 of the AI Act establishes the European AI Board, composed of one representative per Member State, which, according to art. 66, will contribute to the coordination among the national competent authorities responsible for the application of the AI Act. One of the two sub-groups within the Board will act as the administrative cooperation group (ADCO) within the meaning of art. 30 of Regulation (EU) 2019/1020. The Board is expected to promote consistency in the national enforcement of the AI Act.

All these measures will, in the medium term, contribute to the functioning of the internal market as a single market for AI purposes. Nevertheless, and unfortunately, it will be unavoidable that, at least during the first years of application of the AI Act, the existence of around 2,000 different market surveillance authorities, together with the evolving nature of AI technology (continuously creating new regulatory challenges), will result in some unpredictability and in the coexistence of different criteria in the interpretation and application of the AI Act.


This article first appeared on Lexology | Source