The European Union (EU) has made steady progress in shaping its proposed AI law, known as the “AI Act.” With the European Parliament having approved its preferred version, the AI Act has now entered the final stage of the legislative process: a three-way negotiation known as the “trilogue.” The aim is to agree on a final version of the law by the end of 2023. The EU’s objective is to ensure that AI developed and used within Europe aligns with the region’s values and rights, including human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being.
History
Initially proposed in 2021 (see our previous post), the AI Act has attracted intense lobbying efforts and numerous proposed amendments, reportedly surpassing even the volume received for the EU’s GDPR. This led to delays in the legislative process, compounded by the emergence of powerful large language models (LLMs) such as ChatGPT and Google Bard, which required lawmakers to reconsider certain aspects of the draft law.
Risk-Based Approach
The proposed AI Act classifies AI uses by risk level, imposing more stringent monitoring and disclosure requirements on high-risk applications than on lower-risk ones. In particular, the European Parliament designated the following additional uses of AI as “high-risk” (amongst others):
- making inferences about personal characteristics based on biometric data, including emotion recognition systems (other than for identity verification);
- safety components used in the supply of water, gas, heating, electricity and critical digital infrastructure;
- monitoring and detecting prohibited behavior of students during tests in education or training;
- recruitment or selection, including for placing targeted job advertisements and screening applicants;
- evaluating creditworthiness or establishing a person’s credit score, except for detecting financial fraud;
- making decisions or materially influencing decisions on the eligibility of natural persons for health and life insurance;
- influencing the outcome of an election or referendum or voting behavior; and
- recommender systems used by social media platforms with over 45 million users.
There are also proposals to completely ban some uses of AI, including the following (amongst others):
- “Real-time” remote biometric identification systems in publicly accessible spaces (i.e., systems in which the capturing of biometric data (e.g., video footage), the comparison and the identification all occur instantaneously, near-instantaneously or in any event without a significant delay);
- “Post” remote biometric identification systems (i.e., systems in which the biometric data has already been captured and the comparison and identification occur only after a significant delay), with the sole exception of use by law enforcement for the prosecution of serious crimes, and only after judicial authorization;
- biometric categorization systems using sensitive characteristics (e.g., gender, race, ethnicity, citizenship status, religion and/or political orientation);
- predictive policing systems (based on profiling, location and/or past criminal behavior);
- emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
Obligations for General Purpose AI
In response to the rise of LLMs, the European Parliament has introduced two new concepts into the draft law, namely:
- General Purpose AI System. An AI system that can be used in or adapted to a wide range of applications for which it was not intentionally and specifically designed; and
- Foundation Model. An AI model trained on broad data at scale, designed for its generality of output and that can be adapted to a wide range of distinctive tasks.
The European Parliament introduced additional requirements for providers of foundation models, which are essentially pre-trained AI models used as a base for developing other AI systems. The recitals of the AI Act note that foundation models are a recent development, in which AI models are developed from algorithms designed to optimize for generality and versatility of output. These models are often trained on a broad range of data sources and on large amounts of data to accomplish a wide range of downstream tasks, including some for which they were not specifically developed and trained, and they are of growing importance to many downstream applications and systems.
Under the new Article 28b, providers of foundation models are obligated to identify and mitigate the risks such models pose to health, safety, fundamental rights, the environment, democracy and the rule of law, both before and throughout development, and to register their foundation models in an EU database. Providers must also ensure that the models meet comprehensive design and development requirements (which involve model evaluation with independent experts, documented analysis, and extensive testing); prepare detailed technical documentation and clear instructions for downstream providers, and retain these for 10 years; and disclose information about the model’s characteristics, limitations, assumptions and risks.
Providers of foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio or video (generative AI), and providers who specialize a foundation model into a generative AI system, are also required to:
- comply with transparency obligations;
- train, and where applicable, design and develop the foundation model in such a way as to ensure adequate safeguards against the generation of content in breach of EU law, in line with prevailing technological advancements and without prejudice to fundamental rights, including freedom of expression; and
- document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.
Supporting Innovation and Protecting Citizens’ Rights
To stimulate AI innovation and support small businesses, the European Parliament has included exemptions for research activities and AI components provided under open-source licenses. The proposed AI Act encourages the establishment of regulatory sandboxes for testing AI before deployment. It also emphasizes individuals’ rights to file complaints about AI systems and receive explanations for decisions made by high-risk AI systems that significantly impact their fundamental rights. The European Parliament has also proposed expanding the role of the so-called “EU AI Office” (created by the AI Act) to monitor the implementation of AI rules.
What Next?
Once the final version of the AI Act is agreed upon and enters into force, there will likely be a 24-month transition period before it applies (i.e., late 2025 as things stand). Businesses and public organizations that produce, distribute or use AI systems should, of course, take steps now to identify the risks associated with the use of AI in their operations. Understanding and documenting the AI systems in use, and the decisions made in developing those systems, will undoubtedly be key to complying with the new EU regulatory landscape.
On the UK side, it is also worth noting that, on March 29, 2023, the UK Government released a white paper outlining its pro-innovation approach to AI regulation. Rather than creating new laws or a separate AI regulator, as things stand, existing sectoral regulators will be empowered to regulate AI in their respective sectors. The focus is on enhancing existing regimes to cover AI and avoiding heavy-handed legislation that could hinder innovation. The proposed framework defines AI by reference to its adaptivity and autonomy and sets out principles for regulators to apply in addressing AI risks. While these principles are initially not legally binding, they may become enforceable in the future. The UK is also set to establish central functions to support regulators, as well as a regulatory sandbox for AI innovation.