Overview of the Political agreement on the EU AI Act

“Habemus Actum”

So it happened! And I found some time to write this down for you.

On Friday, 8 December, the institutions of the European Union (EU) successfully concluded negotiations on the essential terms and components of the Artificial Intelligence (AI) Act after months of rigorous discussions. The AI Act stands as a milestone in global AI regulation, reflecting the EU’s commitment to taking the lead with a comprehensive legislative approach that fosters the trustworthy and responsible use of AI systems. Alongside major EU digital legislation such as the General Data Protection Regulation (GDPR), the Digital Services Act, the Digital Markets Act, the Data Act, and the Cyber Resilience Act, the AI Act further shapes the digital landscape.

This document delineates the crucial aspects of this significant political agreement and offers an overview of some of the Act’s tiered compliance obligations. It’s worth noting that certain technical aspects of the AI Act text are still undergoing finalization.

What you need to know, in short

Objectives

  1. To ensure that AI systems placed on the EU market are safe and respect existing EU law
  2. To ensure legal certainty to facilitate investment and innovation in AI
  3. To enhance governance and effective enforcement of EU law on fundamental rights and safety requirements applicable to AI systems
  4. To facilitate the development of a single market for lawful, safe and trustworthy AI applications, and prevent market fragmentation


Who will be impacted by the AI Act?

  • The AI Act is horizontal: It applies to all AI systems that affect individuals in the EU, regardless of whether they are developed and operated within the EU or elsewhere, and it spans all sectors.
  • The AI Act entails varied obligations for all participants across the full AI value chain.

What are the key features of the AI Act?

  • Definition of AI: The AI Act adopts a comprehensive definition of an AI system, drawing on the recently updated Organisation for Economic Co-operation and Development (OECD) definition:

    An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

    This is still subject to change!

  • Risk-based approach focusing on use cases: The obligations are primarily determined by the level of risk posed by the use (or potential use) of an AI system, rather than the technology upon which it is based.
  • General Purpose AI systems are treated separately due to the extensive range of potential use cases.

Risk classification system of the EU AI Act

  • Risk classification system: The AI Act establishes a tiered compliance framework comprising different risk categories, each with distinct requirements. All AI systems must be inventoried and assessed to determine their risk category and the obligations that follow from it (a code sketch of this tiering follows this list).
    • Prohibited systems: Systems that pose an unacceptable risk to safety, security, and fundamental rights will be prohibited from use in the EU.
    • High-risk AI systems: These systems will bear the majority of compliance obligations (alongside GPAI systems – see below), including the establishment of risk and quality management systems, data governance, human oversight, cybersecurity measures, post-market monitoring, and maintenance of required technical documentation. (Additional obligations may be specified in subsequent AI regulations for healthcare, financial services, automotive, aviation, and other sectors.)
    • Transparency systems: These systems (for example, chatbots or systems generating synthetic content) will be subject to specific transparency obligations, such as informing users that they are interacting with an AI system or labelling AI-generated content.
    • Minimal risk AI systems: Beyond the initial risk assessment, the AI Act imposes no additional obligations on these systems but encourages companies to voluntarily commit to codes of conduct.
  • Pre-market conformity assessments for high-risk AI systems: High-risk systems will need a conformity assessment to demonstrate compliance before entering the market:
    • The application of harmonized standards (currently under development, see below) will enable AI system providers to demonstrate compliance through self-assessment.
    • In limited cases, a third-party conformity assessment by an accredited independent assessor (a “notified body”, e.g. TÜV SÜD) will be required.
  • General purpose AI systems (GPAI), including foundation models and generative AI: These advanced models and systems will be regulated through a separate tiered approach, with additional obligations for models posing a “systemic risk.”
  • Sandboxes, measures to support innovation: Regulatory “sandboxes” will be available across the EU for operators (especially small and medium enterprises) to access voluntarily. Here, they can innovate, experiment, test, and validate the compliance of their AI systems with the AI Act in a secure environment.
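To make the tiering concrete, here is a minimal Python sketch of the four risk categories and their obligations. The tier names and obligation strings are illustrative shorthand for the points above, not the Act’s legal wording:

```python
from enum import Enum, auto


class RiskTier(Enum):
    """Illustrative risk tiers of the AI Act's compliance framework."""
    PROHIBITED = auto()    # unacceptable risk: banned from the EU market
    HIGH = auto()          # heaviest compliance obligations
    TRANSPARENCY = auto()  # disclosure obligations
    MINIMAL = auto()       # no extra obligations; voluntary codes of conduct


# Simplified mapping from tier to obligations, paraphrasing the list above.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["may not be placed on the EU market"],
    RiskTier.HIGH: [
        "risk and quality management systems",
        "data governance",
        "human oversight",
        "cybersecurity measures",
        "post-market monitoring",
        "technical documentation",
        "pre-market conformity assessment",
    ],
    RiskTier.TRANSPARENCY: [
        "inform users they are interacting with an AI system",
        "label AI-generated content",
    ],
    RiskTier.MINIMAL: ["voluntary codes of conduct (encouraged)"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]
```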

Interface with other regulations, and penalties of up to 7%

  • Interaction with other EU laws: Obligations under the AI Act must be integrated into existing compliance processes established for implementing EU laws, such as those related to product safety, privacy, and financial services.
  • Enforcement and penalties: National competent authorities will possess enforcement powers with the ability to impose significant fines depending on the level of noncompliance.
  • For the use of prohibited AI systems, fines may amount to up to 7% of worldwide annual turnover (revenue), while noncompliance with requirements for high-risk AI systems may result in fines of up to 3% of the same.
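As a back-of-the-envelope illustration of how these caps scale, here is a tiny calculation; the turnover figure is invented for the example:

```python
def max_fine(worldwide_annual_turnover: float, cap_pct: float) -> float:
    """Upper bound of a fine given a percentage cap on worldwide turnover."""
    return worldwide_annual_turnover * cap_pct / 100


# Hypothetical company with EUR 500 million worldwide annual turnover.
turnover = 500_000_000
print(f"Cap for prohibited-AI violations (7%): EUR {max_fine(turnover, 7):,.0f}")  # 35,000,000
print(f"Cap for high-risk violations (3%):     EUR {max_fine(turnover, 3):,.0f}")  # 15,000,000
```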

When will the EU Artificial Intelligence Regulation take effect?

The AI Act is expected to come into force between Q2 and Q3 of 2024, with prohibitions being enforced six months after that date. Some GPAI obligations may take effect after 12 months, although the details are yet to be officially confirmed. All other obligations will apply after 24 months.
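To see what this staggered timeline could mean in practice, here is a small date calculation. The entry-into-force date below is an assumption for illustration only; the actual date depends on final adoption and publication:

```python
import calendar
from datetime import date


def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day to month length."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)


# Assumed entry-into-force date (illustrative only, not confirmed).
entry_into_force = date(2024, 7, 1)

print("Prohibitions apply:        ", add_months(entry_into_force, 6))   # 2025-01-01
print("GPAI obligations may apply:", add_months(entry_into_force, 12))  # 2025-07-01
print("All other obligations:     ", add_months(entry_into_force, 24))  # 2026-07-01
```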

What actions should organisations take?

  • Compile an inventory of all AI systems developed or deployed and determine if any fall within the AI Act’s scope (a minimal sketch of such an inventory record follows this list).
  • Evaluate and categorize in-scope AI systems to determine their risk classification and identify applicable compliance requirements.
  • Understand the organization’s position in relevant AI value chains, the associated compliance obligations, and how these obligations will be met. Compliance should be integrated into all functions responsible for AI systems throughout their lifecycle along the value chain. Have a look at ISO/IEC 42001:2023, the standard for AI management systems.
  • Consider other questions, risks (e.g., interaction with other EU or non-EU regulations, including data privacy), and opportunities (e.g., access to AI Act sandboxes for innovators, small and medium enterprises, and others) posed to the organization’s operations and strategy by the AI Act.
  • Develop and execute a plan to ensure that appropriate accountability and governance frameworks, risk management and control systems, quality management, monitoring, and documentation are in place when the Act comes into force.
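As a starting point for the inventory and classification steps above, here is a minimal sketch of what a register entry might capture. All field names and the example system are hypothetical, not prescribed by the Act:

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI-system inventory."""
    name: str
    owner: str                     # accountable function or team
    role_in_value_chain: str       # e.g. "provider", "deployer", "importer"
    intended_use: str
    in_scope_of_ai_act: bool
    risk_tier: str = "unassessed"  # "prohibited", "high", "transparency", "minimal"
    applicable_obligations: list[str] = field(default_factory=list)


# Example entry: employment-related use cases are listed as high-risk.
register = [
    AISystemRecord(
        name="cv-screening-assistant",
        owner="HR",
        role_in_value_chain="deployer",
        intended_use="rank incoming job applications",
        in_scope_of_ai_act=True,
        risk_tier="high",
    ),
]
```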


So, this was obviously the short version… Let’s deep-dive together in another post.
