The EU AI Act simplified + my honest thoughts
An imperfect but respectable start to keeping AI safe and responsible
Author’s note: This article will be continuously reviewed and revised as the EU AI Act develops.
Last update: 5 July 2024.
Background
On 13 March 2024, the European Parliament passed the Artificial Intelligence Act (EU AI Act), which is now only a few minor formalities away from entering into law. Once in force, the EU AI Act will be the first law in the world to broadly regulate AI systems across all sectors based on their risk level.
But will this act support or break the AI industry? In this premium article, I’ll explain the basics of the EU AI Act in very simplified terms, and why I think it’s a respectable but imperfect starting point for keeping AI systems safe and responsible.
How do EU laws work?
The EU is not a country, but a political and economic union of (currently) 27 European countries that has existed since 1993.
Each member state remains independent. However, the EU as a whole can create ‘regulations’ that directly bind member states, or ‘directives’ that each member state must enact into its own national law. Member states work together to develop, negotiate and approve EU laws.
Introducing the EU AI Act
On 13 March 2024, after 2 years of back-and-forth negotiations, the EU Parliament finally passed the EU AI Act by an overwhelming majority: 523 votes for, 46 against and 49 abstentions.
On 21 May 2024, the EU Council (i.e. the other wing of the EU government) approved the EU AI Act, which will now proceed to publication in the Official Journal (i.e. the official publication for EU laws) in the next few days. The Act as a whole will enter into force 20 days after publication, though its provisions will apply on a staggered timeline (see below).
The latest official version of the Act is available here.
How will the EU AI Act work?
The EU AI Act will serve as a ‘one-stop-shop’ law to regulate “AI systems” across the board (not just in a particular sector).
The Act will apply both within and outside the EU. This means non-EU businesses that provide AI systems in the EU will need to comply with the Act. I’ll dive into the extra-territorial application a bit later.
“AI system” is defined as:
a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
I’ll come back to this definition later.
AI systems will be regulated based on their risk level.
“Prohibited” systems will be banned. The Act will prohibit certain use cases, namely (i) manipulative and deceptive practices, (ii) exploitation of vulnerabilities, (iii) biometric categorisation, (iv) social scoring, (v) real-time biometric identification, (vi) risk assessment in criminal offences, (vii) facial recognition databases, and (viii) emotion inference in workplaces and educational institutions. Certain exceptions will apply to real-time biometric identification for law enforcement purposes. These ‘prohibited systems’ provisions will apply 6 months after the Act comes into force (i.e. in early 2025).
“High risk” systems include those that aren’t banned, but pose “significant risk to health, safety or fundamental rights”. Examples include AI systems used in critical areas like health, critical infrastructure, education, employment and law enforcement. Such systems will need to follow strict transparency, data governance, risk management, registration and reporting obligations. These provisions will come into effect 2 to 3 years after the Act comes into force, depending on the type of high-risk system.
“Limited risk” systems are those that aren’t so risky but still interact with humans or generate/manipulate content (including deepfakes). Such systems only need to be documented and transparent to users (e.g. users should be made aware that they are interacting with an AI system or that content was AI-generated).
Technically speaking, there is no actual category called “limited risk”. It’s more of an informal reference to the Chapter IV transparency provisions. A particular AI system may fall under both the Chapter III high-risk and the Chapter IV transparency provisions at the same time; for example, biometric categorisation and emotion recognition systems are both high-risk and subject to transparency requirements (see the sketch after this list for how the categories stack).
“General purpose AI” is regulated at two levels: “GPAI models” and “GPAI systems”. GPAI models cover large language models like GPT-4 and Gemini, while GPAI systems are multi-purpose apps built on those models (e.g. ChatGPT). Both GPAI systems and models must come with technical documentation and detailed summaries of their training data.
GPAI models with “systemic risk” must comply with further rules and codes of practice around testing, governance and security. I’ll go into the concept of “systemic risk” (and its potential issues) later in the article.
Systems that don’t fall into the above categories will be unregulated. This should be the case for most AI systems (e.g. AI-enabled video games or spam filters).
Open source models will be exempt from the EU AI Act unless they are integrated into a prohibited or high-risk system, or are GPAI models with systemic risk.
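Because these categories overlap rather than forming a strict ladder, I find it helpful to think of the Act as assigning each system a set of applicable regimes. Here’s a rough sketch of that mental model in Python (purely illustrative shorthand on my part, not terminology or legal tests from the Act itself):

```python
def applicable_regimes(is_prohibited_use: bool,
                       is_high_risk: bool,
                       interacts_or_generates_content: bool,
                       is_gpai_model: bool,
                       has_systemic_risk: bool) -> set[str]:
    """Return the (possibly overlapping) set of regimes that apply.

    Illustrative only: the flag names are my own shorthand, not legal tests.
    """
    if is_prohibited_use:
        # Prohibited practices are simply banned; nothing else matters.
        return {"banned (prohibited practices)"}
    regimes = set()
    if is_high_risk:
        regimes.add("high-risk obligations (Chapter III)")
    if interacts_or_generates_content:
        # These stack with the high-risk rules: e.g. emotion recognition
        # systems trigger both Chapter III and Chapter IV.
        regimes.add("transparency obligations (Chapter IV)")
    if is_gpai_model:
        regimes.add("GPAI technical documentation + training data summary")
        if has_systemic_risk:
            regimes.add("systemic-risk testing, governance and security rules")
    # Anything that triggers no regime is unregulated.
    return regimes or {"unregulated"}
```

The key point the sketch captures is that prohibition trumps everything, while the high-risk, transparency and GPAI regimes can stack on the same system.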
The Act imposes significant sanctions for non-compliance. For the most serious infringements, fines can reach 7% of global annual turnover per violation, or up to 35 million euros, whichever amount is higher.
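To put that ceiling in perspective, here’s a quick back-of-the-envelope calculation (the turnover figure is entirely hypothetical):

```python
# The rule for the most serious infringements: 7% of global annual
# turnover or EUR 35 million, whichever is higher.
turnover_eur = 2_000_000_000  # hypothetical global annual turnover
max_fine = max(0.07 * turnover_eur, 35_000_000)
print(f"Maximum fine: EUR {max_fine:,.0f}")  # -> Maximum fine: EUR 140,000,000
```

In other words, the 35 million euro floor is there to catch smaller providers; for large businesses, the 7% turnover figure will almost always be the binding number.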
Has the EU made the right move?
Overall, I think the EU AI Act represents an intuitive risk-based approach to regulating AI, with many ex ante requirements to protect society from harmful and dangerous AI applications. In fact, the Act is often seen as an exemplar of ‘risk-based’ regulation, inspiring similar thinking in Canada, Brazil, Peru, South Korea and Thailand. For that reason, the Act is a respectable move.
However, I do have some questions about its practical impact.
What’s under the paywall?
I dive into the latest legislative text and explore the practical implications:
💡 The 'big picture' behind the AI Act - what many commentators overlook
💡 Definition of "AI system" - a potential loophole?
💡 Extra-territorial application - it's broader than you think
💡 GPAI model rules - does the 10^25 FLOP threshold work?
💡 Provider v deployer responsibility - the issue with "significant modification"
💡 The AI Liability Directive - what this means for documentation practices
💡 Will the AI Act stifle innovation? 4 factors to consider
💡 The AI Act is NOT the "next GDPR"
The big picture
But before I provide any critique, I just want to remind everyone (myself included) that the field of AI regulation requires a ‘big picture’ perspective.