The EU AI Act: A New Approach to Regulating Artificial Intelligence

The European Union (EU) has been working for years to establish a risk-based rulebook for artificial intelligence (AI), known as the EU AI Act. This regulation, which is designed to create a safer and more trustworthy AI ecosystem, will become a major point of focus in the coming months and years as companies start meeting key compliance deadlines.

What Is the EU AI Act Trying to Achieve?

The EU’s primary goal with the AI Act is to encourage the adoption of AI while ensuring that its use is ethical and respects fundamental rights. The regulation aims to balance fostering innovation and AI growth with protecting citizens from the technology’s potential risks. The EU wants AI to remain “human-centric” and businesses to have clear guidelines for implementing AI technologies safely.

Why Is the EU Focusing on AI Regulation?

AI technology has the potential to revolutionize industries, improve productivity, and enhance everyday life. However, if AI systems are poorly designed or misused, they could cause significant harm, particularly when it comes to personal rights and freedoms. The EU’s goal is to set conditions that minimize these risks while building trust among citizens, making them more likely to accept and use AI technologies.

How Does the EU AI Act Work?

The AI Act classifies AI applications according to the level of risk they pose, following a tiered, risk-based approach that divides AI uses into four categories (a short illustrative sketch follows the list):

  1. Unacceptable Risk (Banned Uses): Certain AI applications are deemed so risky that they are banned outright. These include AI systems that use manipulative or harmful techniques, such as social scoring, as well as real-time remote biometric identification in publicly accessible spaces. However, there are narrow exceptions to some of these bans (e.g., law enforcement using real-time biometric identification to investigate specific serious crimes).
  2. High-Risk Applications: These include AI used in critical areas such as healthcare, education, law enforcement, and transportation. Developers of these applications must conduct risk assessments, meet high standards of transparency, and put systems in place to manage and mitigate potential risks. High-risk systems must also be registered in an EU database.
  3. Medium-Risk (Limited-Risk) Applications: These AI systems, such as chatbots or tools that create synthetic media, carry transparency obligations. Users must be informed when they are interacting with an AI system, and AI-generated or manipulated content must be labeled as such.
  4. Low-Risk (Minimal-Risk) Applications: These AI systems, like the recommendation engines used by social media platforms, face no specific obligations under the Act, though providers are encouraged to adopt voluntary codes of conduct to build trust.
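
To make the tiered structure concrete, here is a minimal, purely illustrative Python sketch of the four categories and the kind of obligation attached to each. The example use cases and their mapping are assumptions for illustration, not legal classifications:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "risk assessment, transparency, EU database registration"
    LIMITED = "transparency and disclosure obligations"
    MINIMAL = "no specific obligations; voluntary codes of conduct"

# Hypothetical mapping of example use cases to tiers -- illustrative only.
# Real classification depends on the Act's annexes and the deployment context.
EXAMPLE_CLASSIFICATION = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "AI triage tool in a hospital": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "social media recommendation engine": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```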

Regulation of General-Purpose AI (GPAI)

A special focus of the AI Act is general-purpose AI (GPAI) models, which form the backbone of many downstream AI applications. Given that broad influence, the Act applies additional rules to developers of these models, particularly with regard to transparency and risk management.

In light of the explosion of Generative AI tools like ChatGPT, the EU updated the AI Act to address the unique challenges posed by these technologies. Some tech firms, including OpenAI, lobbied for lighter regulations on GPAIs, arguing that stringent rules could hamper European AI development. As a result of such lobbying, the final law includes certain carve-outs for open-source models and R&D efforts.

Key Compliance Deadlines

The EU AI Act officially entered into force on August 1, 2024, but its provisions apply in stages. Some of the key compliance deadlines, each counted from that date, include (a short date calculation follows the list):

  • Six months after entry into force (February 2, 2025): The bans on unacceptable-risk AI uses apply.
  • Nine months after entry into force (May 2, 2025): Codes of practice for AI developers are due.
  • 12 months after entry into force (August 2, 2025): Transparency and governance obligations, including those for general-purpose AI models, take effect.
  • 24 to 36 months after entry into force (August 2, 2026 and August 2, 2027): High-risk AI systems must comply with the Act’s more detailed requirements.
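
Because every milestone is defined relative to the entry-into-force date, the schedule can be reproduced with simple date arithmetic. A minimal Python sketch, assuming the third-party python-dateutil package; note that the Act’s official application dates fall one day after the computed dates, because EU rules count periods from the day after entry into force:

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

ENTRY_INTO_FORCE = date(2024, 8, 1)

# Months-after-entry milestones as described above; labels are paraphrases,
# not legal text.
MILESTONES = {
    6: "Bans on unacceptable-risk AI uses apply",
    9: "Codes of practice are due",
    12: "Transparency and GPAI governance obligations take effect",
    24: "Most high-risk obligations apply",
    36: "High-risk obligations for regulated products apply",
}

for months, label in MILESTONES.items():
    milestone = ENTRY_INTO_FORCE + relativedelta(months=months)
    # The Act's own application dates fall one day later (e.g. February 2,
    # 2025), since EU rules count from the day after entry into force.
    print(f"{months:>2} months ({milestone.isoformat()}): {label}")
```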

This staggered approach gives companies time to adjust to the new rules and provides regulators time to develop clear guidelines for enforcement.

How Are Violations Enforced?

The AI Act includes significant penalties for non-compliance, scaled to the severity of the violation. Fines can reach up to €35 million or 7% of a company’s global annual turnover (whichever is higher) for violations of the banned uses, up to €15 million or 3% for non-compliance with most other obligations, and up to €7.5 million or 1% for supplying incorrect information to regulators. These fines are intended to ensure companies take the regulation seriously and implement the necessary safeguards.
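
Because each cap is the higher of a flat amount and a share of turnover, the effective ceiling depends on company size. A minimal sketch of that calculation in Python, using a hypothetical €10 billion annual turnover for illustration:

```python
def fine_cap_eur(annual_turnover_eur: float, pct: float, flat_eur: float) -> float:
    """Upper bound of an AI Act fine: the higher of a flat amount and a
    percentage of worldwide annual turnover (the general rule for large
    companies; for SMEs the lower of the two applies instead)."""
    return max(flat_eur, annual_turnover_eur * pct)

# Hypothetical company with €10 billion in annual turnover -- illustrative only.
turnover = 10_000_000_000

print(fine_cap_eur(turnover, 0.07, 35_000_000))  # banned uses: 700,000,000.0
print(fine_cap_eur(turnover, 0.03, 15_000_000))  # other obligations: 300,000,000.0
print(fine_cap_eur(turnover, 0.01, 7_500_000))   # incorrect information: 100,000,000.0
```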

What’s Next?

While the EU AI Act has been hailed as the first of its kind, the full picture of its implementation is still being shaped. As AI technologies evolve, so too will the regulation. The EU will likely continue to refine the law and offer guidance to companies about how to comply as the landscape of AI technology develops further.

In the years ahead, the EU AI Act will be a key part of shaping the future of AI in Europe, helping to ensure that innovation and public trust go hand in hand.