The EU AI Act: A Global Reference For AI Liability?
Introduction
On 21 May 2024, the Council of the European Union approved the EU AI (Artificial Intelligence) Act, with most of its provisions taking effect after a 24-month lead-in period. Together with the proposed AI Liability Directive, the Act seeks to provide individuals harmed by AI with the same level of protection as victims of other technologies in the EU, including in cases of “serious incidents” and “reasonably foreseeable misuse” of AI systems.
Details of the Act
It is important to identify the core definitions contained within the Act.
An AI system is defined as “a machine-based system that is designed to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
A Provider is a natural or legal person, public authority, agency, or other body that develops an AI system (or has one developed) and places it on the market or puts it into service under its own name or trademark.
A Deployer is a natural or legal person, public authority, agency, or other body that uses an AI system under its own authority, except where the system is used in the course of a personal, non-professional activity.
The Act also distinguishes between different “types” of AI. Classification matters, because the obligations imposed on operators depend on the type of AI system they are dealing with.
High-Risk AI
An AI system is deemed high-risk when it poses a significant risk of harm to the health, safety, or fundamental rights of individuals (Article 6).
The obligations of Providers are contained within Articles 17 and 20 of the Act. Under these provisions, Providers are required to (1) put in place a quality management system governing the AI system, and (2) immediately take the necessary corrective actions, or recall a system that has been placed on the market, should they become aware that it does not comply with the Act.
Article 23 identifies the obligations of Importers: an Importer must not place a high-risk AI system on the market where it has sufficient reason to consider that the system does not conform with the Regulation.
Article 24 requires Distributors of AI technology to verify that a high-risk AI system complies with the Act's requirements before making it available on the market.
Finally, Article 26 requires Deployers to take “appropriate technical and organisational measures” to ensure that they use such systems in accordance with the instructions for use that accompany them.
General-purpose AI models
‘General-purpose AI models’ are defined in Article 3(63). They are models which “display significant generality and are capable of competently performing a wide range of distinct tasks”.
The obligations imposed on Providers of general-purpose AI models are contained in Article 53. Providers are required to (1) draw up and keep up to date technical documentation of the model and provide it to the AI Office or national competent authorities upon request, (2) make available information and documentation that enables downstream providers to understand the capabilities and limitations of the model, (3) put in place a policy to comply with Union copyright law, and (4) publish a sufficiently detailed summary of the content used for training the model, using the template provided by the AI Office.
General-purpose AI models with systemic risk
‘Systemic risk’ is defined in Article 3(65). It refers to a risk that is specific to the high-impact capabilities of general-purpose AI models, whether because of their reach in the Union market or because of actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole.
Pursuant to Article 55, Providers of such models are obligated to (1) perform model evaluations and assess and mitigate potential systemic risks, (2) keep track of, document, and report serious incidents to the AI Office or national competent authorities without undue delay, and (3) ensure an adequate level of cybersecurity protection for the model.
A step towards uniform AI liability?
Mathieu Michel, the Belgian Minister for Digitalisation, stated: “This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies.”
However, given that the Act is legally enforceable only within the 27 Member States of the EU, it remains uncertain whether third countries will adopt similar measures. Nonetheless, as the first comprehensive framework of its kind, the Act may serve as a blueprint for AI liability regimes elsewhere.
It is anticipated that once the Act applies, AI providers will become more vigilant about the quality of the AI systems they make available to the public. Consequently, fewer people should be harmed by AI, and the technology should become a safer tool for public use.
Conclusion
The rationale behind the EU AI Act is that imposing obligations on those operating within the AI industry will diminish the risks associated with the technology.
The 24-month lead-in period is important. It will give EU institutions time to complete the legal framework governing AI within the EU, allowing the legislature to raise the safety standards protecting those who use AI technology. It is also anticipated that this two-year window will encourage other nations to take the time needed to design their own AI legislation.
By Agnes Tsang