On March 13, 2024, the EU Parliament voted to pass the EU’s much-discussed AI Act (with 523 votes in favor, 46 against and 49 abstentions). For an insight into the AI Act’s progression through the EU lawmaking system, see our earlier posts: here, here and here.
The legal text of the regulation will now be reviewed before it is endorsed by the EU Council and then finally adopted. The AI Act will come into force 20 days after the final text is published in the Official Journal of the European Union, with a phased implementation timeline.
Summary of key provisions:
- Application. The AI Act applies to “providers” and “deployers” of AI systems that are established or located in the EU, that place AI systems on the EU market, or where the output produced by the party’s AI system is used in the EU. The AI Act also defines “importers” and “distributors” of AI systems. It will be important that organizations correctly identify the role they play in the AI ecosystem so that they can map their compliance requirements accordingly.
- Classification. The AI Act adopts a risk-based classification system, with AI systems classified as Prohibited, High-Risk or Unclassified. Prohibited AI systems, such as certain sensitive biometric categorization systems, facial recognition databases built through untargeted scraping of CCTV footage or images online, and social scoring systems, will be banned under the AI Act, while High-Risk systems will be subject to stringent compliance obligations such as conformity assessments, registration on an EU database, and documented post-market monitoring. High-Risk systems are listed in an Annex to the AI Act and include emotion recognition systems, biometric identification systems, and AI systems used in employment decision-making.
- General Purpose AI. Separate obligations apply in the AI Act to General Purpose AI Models (AI models that display significant generality and are capable of competently performing a wide range of distinct tasks). These obligations increase where the General Purpose AI Model is deemed to present a “Systemic Risk” by meeting certain criteria, including in relation to the computing power used in training the model.
- Transparency. Irrespective of any classification (including AI systems that are Unclassified), the AI Act includes overarching transparency obligations that apply to AI systems designed to interact with human users directly. Such systems must be designed and developed in such a way that it is obvious to users that they are interacting with an AI system. Synthetic audio, image, text or video content generated by an AI system must also be marked in a machine-readable format and detectable as generated by AI.
- Implementation Timeline. As stated above, the AI Act will come into force 20 days after it is published. The operative provisions will then apply as follows:
- 6 months: the ban on Prohibited AI systems goes into effect;
- 12 months: obligations on providers of General Purpose AI models go into effect;
- 24 months: remaining obligations go into effect; and
- 36 months: obligations on High-Risk AI systems used as a safety component of, or that are themselves, products listed in Annex II (such as medical devices, planes and cars) go into effect.
To the extent that they have not already done so, organizations should begin preparing for the AI Act by identifying the extent to which it will apply to their business, documenting and categorizing AI use within the organization, and determining the corresponding obligations under the AI Act.