As of February 2, 2025, the first provisions of the EU Artificial Intelligence (AI) Act have come into effect, marking a significant step in regulating AI within the European Union. These initial measures focus on AI literacy and banning specific AI practices deemed to pose unacceptable risks.

Key Provisions Now in Effect

AI Literacy Requirement (Article 4 of the EU AI Act)

The AI Act requires providers and deployers of AI systems to take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff and other persons operating or using AI systems on their behalf. This requirement applies to all AI systems, regardless of their risk level, and aims to equip professionals with the knowledge and understanding needed to operate AI technologies responsibly.

Key considerations include:

  • Training staff on AI deployment, risks, and ethical considerations.
  • Incorporating AI literacy into existing corporate training programs, such as privacy and cybersecurity.
  • Extending AI literacy efforts beyond internal teams to engage with vendors, clients, and affected users.

While no specific penalties have been outlined for non-compliance with the AI literacy requirement, regulators may consider violations when determining penalties for other breaches of the AI Act.

Ban on Prohibited AI Practices (Article 5 of the EU AI Act)

The AI Act identifies certain AI applications as posing an “unacceptable risk” and bans their use within the EU. These include:

  • Manipulative AI Systems: AI that uses subliminal or deceptive techniques to distort human behavior and impair decision-making.
  • Exploitation of Vulnerabilities: AI systems that exploit vulnerabilities related to a person’s age, disability, or specific social or economic situation.
  • Social Scoring: Systems that evaluate or classify individuals based on social behavior or personal characteristics, leading to detrimental or disproportionate treatment.
  • Facial Recognition Databases: The untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases.
  • Emotion Recognition: AI that infers emotions in workplaces or educational settings (except for medical or safety purposes).
  • Biometric Categorization: AI systems that infer sensitive attributes, such as race, political beliefs, or sexual orientation, based on biometric data.
  • Real-Time Biometric Identification: The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions.

Although the penalties for violating Article 5 will only take effect on August 2, 2025, non-compliance could result in significant fines of up to €35 million or 7% of global annual turnover, whichever is higher.

Industry Implications and Compliance Measures

The implementation of these AI Act provisions signals a shift in how AI is developed and used within the EU. Organizations leveraging AI technologies must act swiftly to assess their AI systems, update compliance frameworks, and implement AI literacy programs.

Companies providing general-purpose AI platforms must take extra precautions to prevent misuse by end-users. Compliance strategies may include:

  • Implementing internal codes of conduct for AI deployment.
  • Updating contracts to explicitly ban prohibited AI practices.
  • Engaging with regulators to ensure adherence to AI Act guidelines.

Looking Ahead

The European Commission is expected to release further guidelines on AI literacy and prohibited AI practices, providing additional clarity for businesses and developers. The AI Act will continue rolling out in phases: most provisions apply from August 2, 2026, with obligations for certain high-risk AI systems following by August 2, 2027.

For organizations operating within the EU, understanding and implementing these initial AI Act requirements is critical. Companies should prioritize compliance efforts to mitigate risks and align with evolving regulatory expectations.


Stay updated with the latest developments in AI regulations and compliance to ensure your organization remains ahead of the curve. For more information, visit the European Commission’s AI Act resource hub.