Key Points
- The EU AI Act sets a uniform legal framework for AI across EU countries
- Applies to both local and foreign companies
- Aims to promote human-centric and trustworthy AI
- Ensures protection of health, safety, and fundamental rights
- Adopts a risk-based approach to regulate AI use cases
- Comes with penalties for non-compliance
Introduction to the EU AI Act
The European Union’s Artificial Intelligence Act, known as the EU AI Act, has been described by the European Commission as “the world’s first comprehensive AI law.” The EU AI Act applies to local and foreign companies alike, and it can affect both providers and deployers of AI systems.
The EU AI Act exists to establish a uniform legal framework for AI across EU countries. With timely regulation, the EU seeks to create a level playing field across the region and foster trust, which could also create opportunities for emerging companies.
Key Provisions of the EU AI Act
The EU AI Act adopts a risk-based approach: it bans a handful of “unacceptable risk” use cases, subjects a set of “high-risk” uses to tight regulation, and applies lighter obligations to “limited risk” scenarios.
The EU AI Act entered into force on August 1, 2024, but its obligations take effect through a series of staggered compliance deadlines. In most cases, they will also apply sooner to new market entrants than to companies that already offer AI products and services in the EU.
Guidelines and Penalties
Ahead of the relevant deadline, the EU published guidelines for providers of general-purpose AI (GPAI) models, which cover both European companies and non-European players. The EU AI Act carries penalties that lawmakers intended to be simultaneously “effective, proportionate and dissuasive,” even for large global players.
Details will be laid down by individual EU countries, but the regulation sets out the overall principle that penalties will vary depending on the deemed risk level, as well as thresholds for each level.
Source: techcrunch.com