Final Draft of EU AI Act Leaked

By neub9

Leaked Text of EU Artificial Intelligence Act Reveals Divergence from Original Proposal

On January 22, 2024, a leaked draft of the final text of the EU Artificial Intelligence Act (“AI Act”) was made public, revealing significant deviations from the original 2021 proposal by the European Commission. The leaked text incorporates elements from both the European Parliament’s and the Council’s proposals.

Key Definitions

The AI Act provides comprehensive definitions for important terms. For example, it defines an “AI system” as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” The definition aligns with that proposed by the European Parliament and the Organisation for Economic Co-operation and Development (OECD).

The term “general-purpose AI system” is separately defined as an “AI system which is based on a general-purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.”

Classification of AI Systems

The AI Act introduces a risk-based legal framework for AI within the European Union, classifying AI systems as follows:

1. Prohibited AI Systems: AI systems that present unacceptable risks to individuals’ fundamental rights would be prohibited under the AI Act. Examples include AI systems used for social scoring based on personal characteristics, AI systems that exploit vulnerabilities in ways that result in significant harm, and AI systems that create facial recognition databases through untargeted scraping of images.

2. High Risk AI Systems: AI systems posing a high risk to individuals’ rights and freedoms will be subject to stringent rules.

3. Transparency Risks: AI systems that pose transparency risks, such as AI systems that interact with humans or generate content, will be subject to specific transparency requirements under the AI Act.

4. Generative AI Models: The AI Act places specific obligations on providers of generative AI models on which general-purpose AI systems are based. Providers of generative AI models posing systemic risk will be subject to additional requirements, including an obligation to ensure an adequate level of cybersecurity protection.

High-Risk AI Systems

High-risk AI systems are further divided into two subsets under the AI Act:

1. Annex II: AI systems considered high-risk due to their coverage by certain EU harmonization legislation, such as laws on the safety of toys, machinery, radio equipment, civil aviation, and motor vehicles, among others.

2. Annex III: AI systems classified as high-risk under Annex III of the AI Act itself, including systems used in biometrics, critical infrastructure, education, employment, and access to essential private or public services.

Obligations Applicable to High-Risk AI Systems

Providers of high-risk AI systems must comply with strict requirements, including establishing, implementing, and documenting risk management systems. Data governance, technical documentation, record-keeping, and registration obligations are also imposed.

Deployers of high-risk AI systems will also have direct obligations, though more limited in scope, such as assigning human oversight and ensuring the relevance of input data.

