Ashurst | Fiona Ghosh | Matthew Worsfold
Sunday 2 February marks a significant step in the ongoing story of AI regulation, as the first key provisions of the EU AI Act take effect. From this date, marketing or using AI systems classified as posing an unacceptable risk will be prohibited in the EU. Prohibited systems include those which use subliminal, manipulative, deceptive or exploitative techniques; systems for “social scoring”; anything which profiles individuals to predict their risk of committing a crime; systems which create or expand facial recognition databases; and systems which infer emotions in work or education settings. Biometric categorisation systems which categorise natural persons to deduce or infer certain personal characteristics are also banned – although limited exceptions apply to biometric identification for law enforcement purposes. Breaching these prohibitions may lead to a fine of up to the higher of EUR 35,000,000 and 7% of global annual turnover.
In addition to the prohibitions on unacceptable-risk systems, Sunday 2 February is the deadline for organisations using AI systems of any kind to meet the Act’s AI literacy requirements. These require organisations to take steps to ensure the adequate training and upskilling of the individuals and teams operating AI systems. If they have not already done so, businesses should ensure that AI training and learning programmes have been adequately designed and rolled out to those using AI, with effective monitoring in place to confirm their completion. For organisations that have recently rolled out generative AI-backed platforms, this is likely to have a far-reaching scope.
Despite extensive industry comment, many open questions remain about the interpretation of the AI Act. Although the Commission has promised guidelines to help providers with compliance by early 2025, and has consulted on the topic, nothing has yet been published, and of course no case law exists on this novel area. Meanwhile, compliance must be seen as a continuing task. Before 2 February, businesses should review any existing inventory of their AI systems and their current AI use policy to identify any last-minute compliance gaps and stop the use of any prohibited systems. They should also ensure they have a plan in place for the ongoing AI literacy of staff.
As the rest of the AI Act comes into effect in stages, the regulatory framework in Europe will change significantly. This comes at a time when other jurisdictions, in particular the US (and to a lesser extent the UK), are signalling a move away from regulation, a shift which may well accelerate in response to competition from elsewhere. The extent of the resulting divergence remains to be seen.
Businesses should therefore use this deadline as a prompt to start working through a plan for compliance. With approximately 18 months until the more substantive deadline for the rest of the Act, businesses should at a minimum begin the discovery and cataloguing exercises needed to identify potential AI systems and use cases. This exercise is likely to take a significant amount of time given the challenges involved: many organisations have been procuring and/or building AI systems for a number of years, are unlikely to hold central registers, and will need to engage extensively with third parties.
This article was originally published on Lexology and you can access the original version here