Political agreement on the EU AI Act

For the lazy ones, here is the short version. And now the longer version, based on the information I have to date (15.12.2023).

On Friday, 8 December, the institutions of the European Union (EU) successfully concluded negotiations on the essential terms and components of the Artificial Intelligence (AI) Act after months of rigorous discussions. The AI Act stands as a milestone in global AI regulation, epitomizing the EU’s commitment to taking the lead with a comprehensive legislative approach that fosters the trustworthy and responsible use of AI systems. In line with major EU digital legislation such as the General Data Protection Regulation (GDPR), the Digital Services Act, the Digital Markets Act, the Data Act, and the Cyber Resilience Act, the AI Act further contributes to shaping the digital landscape.

This document delineates the crucial aspects of this significant political agreement and offers an overview of some of the Act’s tiered compliance obligations. It’s worth noting that certain technical aspects of the AI Act text are still undergoing finalization.

The EU AI Act in detail

Objectives

  1. To ensure that AI systems placed on the EU market are safe and respect existing EU law
  2. To ensure legal certainty to facilitate investment and innovation in AI
  3. To enhance governance and effective enforcement of EU law on fundamental rights and safety requirements applicable to AI systems
  4. To facilitate the development of a single market for lawful, safe and trustworthy AI applications, and prevent market fragmentation

source

Who will be impacted by the AI Act?

  • The AI Act is horizontal: It is applicable to all AI systems that affect individuals in the EU, regardless of whether they are developed and operated within the EU or elsewhere. This applicability spans across all sectors.
  • The AI Act entails varied obligations for all participants in the full AI value chain.

The AI Act, with its extensive scope, imposes significant obligations throughout the value chain, prioritizing the impact of AI systems on individuals, particularly their well-being and fundamental rights. Notably, it incorporates extraterritorial measures, affecting any business or organization providing an AI system that impacts individuals within the EU, regardless of the organization’s headquarters.

The AI Act will be applicable to the following entities (refer to the appendix section below for full definitions of terms):

  • Providers introducing AI systems to the EU market, irrespective of their location
  • Providers and deployers of AI systems situated in non-EU countries, where the AI system’s output is utilized within the EU
  • Deployers of AI systems located within the EU
  • Importers and distributors placing AI systems on the EU market
  • Product manufacturers introducing products with AI systems to the EU market under their own name or trademark

However, the AI Act will not be applicable to:

  • Public authorities in non-EU countries and international organizations with law enforcement and judicial cooperation agreements with the EU, provided adequate safeguards are in place
  • AI systems used for purposes beyond the scope of EU law-making authority, such as military or defense
  • AI systems specifically developed and utilized solely for scientific research and discovery
  • Research, testing, and development activities related to AI systems before their introduction to the market or deployment
  • Free and open-source software, unless its usage would categorize it as a prohibited or high-risk AI system

What are the key features of the AI Act?

  • Definition of AI: The AI Act adopts a comprehensive definition of an AI system, drawing on the recently updated definition from the Organisation for Economic Co-operation and Development (OECD).

    An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

    This is still subject to change…

  • Risk-based approach focusing on use cases: The obligations are primarily determined by the level of risk posed by the use (or potential use) of an AI system, rather than the technology upon which it is based.
  • General Purpose AI systems are treated separately due to the extensive range of potential use cases.

Risk classification system of the EU AI Act

The AI Act establishes a tiered compliance framework comprising different risk categories, each with distinct requirements. All AI systems must be inventoried and assessed to determine their risk category and the resulting responsibilities (a toy triage sketch follows the list below).

  • Prohibited systems: Systems that pose an unacceptable risk to safety, security, and fundamental rights will be prohibited from use in the EU.
  • High-risk AI systems: These systems will bear the majority of compliance obligations (alongside GPAI systems – see below), including the establishment of risk and quality management systems, data governance, human oversight, cybersecurity measures, post-market monitoring, and maintenance of required technical documentation. (Additional obligations may be specified in subsequent AI regulations for healthcare, financial services, automotive, aviation, and other sectors.)
  • Transparency risk AI systems: These systems will be subject to specific transparency and disclosure obligations (e.g., informing people that they are interacting with an AI system)
  • Minimal risk AI systems: After the initial risk assessment and transparency requirements for certain AI systems, the AI Act imposes no additional obligations on these systems but encourages companies to voluntarily commit to codes of conduct.
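
To make the four tiers concrete, here is a toy Python sketch of a first-pass triage over an AI inventory. The tier names come from the Act; the keyword buckets and matching logic are purely illustrative assumptions of mine, not an official classifier.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    TRANSPARENCY = "transparency"
    MINIMAL = "minimal"

# Illustrative keyword buckets only -- a real assessment must follow
# the Act's annexes and legal advice, not string matching.
PROHIBITED_USES = {"social scoring", "workplace emotion recognition"}
HIGH_RISK_USES = {"recruitment screening", "credit scoring", "medical device safety"}
TRANSPARENCY_USES = {"customer chatbot", "deepfake generation"}

def triage(use_case: str) -> RiskTier:
    """Rough first-pass tagging of a system's use case into a risk tier."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    if use_case in TRANSPARENCY_USES:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL  # default: everything else

print(triage("credit scoring"))   # RiskTier.HIGH_RISK
print(triage("spam filtering"))   # RiskTier.MINIMAL
```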

Pre-market conformity assessments for high-risk AI systems

High-risk systems will need a conformity assessment to demonstrate compliance before entering the market:

  • The application of harmonized standards (currently under development, see below) will enable AI system providers to demonstrate compliance through self-assessment.
  • In limited cases, a third-party conformity assessment by an accredited independent assessor (a “notified body”, e.g., TÜV SÜD) will be required.
General purpose AI systems (GPAI), including foundation models and generative AI

These advanced models and systems will be regulated through a separate tiered approach, with additional obligations for models posing a “systemic risk.”

Sandboxes: measures to support innovation

Regulatory “sandboxes” will be available across the EU for operators (especially small and medium enterprises) to access voluntarily. Here, they can innovate, experiment, test, and validate the compliance of their AI systems with the AI Act in a secure environment.

Interface with other regulations and penalties up to 7%

  • Interaction with other EU laws: Obligations under the AI Act must be integrated into existing compliance processes established for implementing EU laws, such as those related to product safety, privacy, and financial services.
  • Enforcement and penalties: National competent authorities will possess enforcement powers with the ability to impose significant fines depending on the level of noncompliance.
  • For the use of prohibited AI systems, fines may amount to up to 7% of worldwide annual turnover (revenue), while noncompliance with requirements for high-risk AI systems may result in fines of up to 3% of the same (see the worked example after the table below).
| Non-compliance category | Fine |
| --- | --- |
| Breach of AI Act prohibitions | Up to €35 million or 7% of total worldwide annual revenue, whichever is higher (see note) |
| Non-compliance with the obligations set out for providers of high-risk AI systems, authorized representatives, importers, distributors, users, or notified bodies | Up to €15 million or 3% of total worldwide annual revenue, whichever is higher (see note) |
| Supply of incorrect or misleading information to notified bodies or national competent authorities in reply to a request | Up to €7.5 million or 1.5% of total worldwide annual revenue, whichever is higher |

Note: in the case of small and medium enterprises, the same caps and percentages apply, but whichever amount is lower.
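
As a quick worked example of the arithmetic in the table, here is a minimal Python sketch. The caps and percentages are those from the table; the “whichever is higher” rule for large companies and the “whichever is lower” rule for SMEs follow the note.

```python
def ai_act_fine(annual_revenue_eur: float, cap_eur: float,
                pct: float, is_sme: bool = False) -> float:
    """Fine = fixed cap or percentage of worldwide annual revenue.

    Large companies pay whichever is HIGHER; per the note, small and
    medium enterprises pay whichever is LOWER.
    """
    pct_amount = annual_revenue_eur * pct
    return min(cap_eur, pct_amount) if is_sme else max(cap_eur, pct_amount)

# Breach of a prohibition by a company with EUR 1bn worldwide revenue:
print(ai_act_fine(1_000_000_000, 35_000_000, 0.07))            # 70,000,000.0
# Same breach by an SME with EUR 20m revenue:
print(ai_act_fine(20_000_000, 35_000_000, 0.07, is_sme=True))  # 1,400,000.0
```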

When will the EU Artificial Intelligence Regulation take effect?

The AI Act is expected to come into force between Q2 and Q3 of 2024, with prohibitions being enforced six months after that date. Some GPAI obligations may take effect after 12 months, although the details are yet to be officially confirmed. All other obligations will apply after 24 months.

| When | What |
| --- | --- |
| Q2 2024 | Expected to enter into force |
| Right after that | The European Commission will begin work to establish the AI Office (EU oversight body) while Member States make provisions to establish AI regulatory sandboxes. |
| Implementation period | The European Commission will launch the AI Pact, allowing organizations to work voluntarily with the Commission to start meeting AI Act obligations ahead of the legal deadlines (see the relevant section below regarding the AI Pact). |
| 6 months after entry into force | Prohibitions become effective |
| 1 year after | Some requirements for GPAI models may come into effect |
| 2 years after | All other AI Act requirements come into effect |
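
To see how the milestones stack up on a calendar, here is a small stdlib-only sketch. The entry-into-force date is a hypothetical placeholder (the Act was not yet in force at the time of writing); the 6/12/24-month offsets come from the table.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Step forward whole calendar months (day clamped to 28 to keep it simple).
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, min(d.day, 28))

entry_into_force = date(2024, 6, 1)  # hypothetical Q2 2024 date
milestones = {
    "Prohibitions effective": add_months(entry_into_force, 6),
    "Some GPAI requirements (expected)": add_months(entry_into_force, 12),
    "All other requirements": add_months(entry_into_force, 24),
}
for label, when in milestones.items():
    print(f"{label}: {when}")
```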

What actions should organisations take?

  • Compile an inventory of all AI systems developed or deployed and determine if any fall within the AI Act’s scope (a minimal inventory sketch follows this list).
  • Evaluate and categorize in-scope AI systems to determine their risk classification and identify applicable compliance requirements.
  • Understand the organization’s position in relevant AI value chains, the associated compliance obligations, and how these obligations will be met. Compliance should be integrated into all functions responsible for AI systems throughout their lifecycle along the value chain.
  • Consider other questions, risks (e.g., interaction with other EU or non-EU regulations, including data privacy), and opportunities (e.g., access to AI Act sandboxes for innovators, small and medium enterprises, and others) posed to the organization’s operations and strategy by the AI Act.
  • Develop and execute a plan to ensure that appropriate accountability and governance frameworks, risk management and control systems, quality management, monitoring, and documentation are in place when the Act comes into force.
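
For the first two action items, here is a minimal sketch of what an inventory record could capture. All field names and example values are my own assumptions, not prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of an AI Act inventory -- field names are illustrative."""
    name: str
    value_chain_role: str        # e.g., "provider", "deployer", "importer"
    in_scope: bool               # does the AI Act apply at all?
    risk_tier: str               # "prohibited" / "high-risk" / "transparency" / "minimal"
    obligations: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("CV screening tool", "deployer", True, "high-risk",
                   ["human oversight", "log retention", "input data relevance"]),
    AISystemRecord("Spam filter", "provider", True, "minimal"),
]
for rec in inventory:
    print(rec.name, "->", rec.risk_tier)
```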

A few definitions of stakeholders in the European Union Artificial Intelligence Regulation

| AI Act term | AI Act definition |
| --- | --- |
| Provider | A natural or legal person, public authority, agency, or other body that develops an AI system, or has one developed, to place on the market or put into service under its own name or trademark. |
| Deployer | A natural or legal person, public authority, agency, or other body using an AI system under its authority. |
| Authorized representative | Any natural or legal person located or established in the EU who has received and accepted a mandate from a provider to carry out its obligations on its behalf. |
| Importer | Any natural or legal person within the EU that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the EU. |
| Distributor | Any natural or legal person in the supply chain, other than the provider or importer, who makes an AI system available on the EU market. |
| Product manufacturer | A manufacturer that puts an AI system on the market, or into service, together with its product and under its own name or trademark. |
| Operator | A general term covering all of the above (provider, deployer, authorized representative, importer, distributor, or product manufacturer). |

And a lot more of Europe 🙂

At an EU level, the AI Act creates:

  • AI Office within the EU Commission, but with functional independence. This new body will have oversight responsibilities for GPAI models. It will contribute to the development of standards and testing practices, coordinate with the national competent authorities and help enforce the rules in Member States
  • AI Board representing the Member States to provide strategic oversight for the AI Office. The Board will support the implementation of the AI Act and regulations promulgated pursuant to it, including the design of codes of practice for GPAI models
  • Scientific panel of independent experts to support the activities of the AI Office. The panel will contribute to the development of methodologies for evaluating the capabilities of GPAI models and their subsequent classification, while also monitoring possible safety risks
  • Advisory forum with representatives of industry and civil society. Will provide technical expertise to the AI Board

Welcome to an extended area of bureaucracy.

How are AI systems classified?

The AI Act establishes compliance obligations by assessing the inherent risks associated with the specific applications for which AI systems are employed.

General-purpose AI systems (GPAI), encompassing foundation models and generative AI systems, adhere to a distinct classification framework. Please refer to the relevant section below for further details.

The EU AI Act risk classification in detail:

| Classification | Description | Compliance level | Examples |
| --- | --- | --- | --- |
| Prohibited AI systems | Prohibited because their uses pose an unacceptable risk to the safety, security, and fundamental rights of people. | Prohibition | Use of AI for social scoring that could lead to detrimental treatment, emotion recognition systems in the workplace, biometric categorization to infer sensitive data, and predictive policing of individuals, among other uses. Some exemptions will apply. |
| High-risk AI systems | Permitted, subject to compliance with the requirements of the AI Act (including conformity assessments before being placed on the market). | Significant | Use of AI in recruitment, biometric identification surveillance systems, safety components (e.g., medical devices, automotive), access to essential private and public services (e.g., creditworthiness, benefits, health and life insurance), and safety of critical infrastructure (e.g., energy, transport). |
| Transparency risk AI systems | Permitted, subject to specific transparency and disclosure obligations where uses pose a limited risk. | Limited | Certain AI systems that interact directly with people (e.g., chatbots), and visual or audio “deepfake” content that has been manipulated by an AI system. |
| Minimal risk AI systems | Permitted, with no additional AI Act requirements where uses pose minimal risk. | Minimal | By default, all other AI systems that do not fall into the above categories (e.g., photo-editing software, product-recommender systems, spam filtering software, scheduling software). |

Prohibited Systems: when is a risk unacceptable?

The AI Act expressly prohibits AI systems that present unacceptable risks, capable of undermining an individual’s fundamental rights or causing them physical or psychological harm. These prohibitions encompass:

  • AI systems exploiting vulnerabilities or using subliminal techniques to manipulate individuals or specific groups (e.g., children, the elderly, or people with disabilities), thereby circumventing users’ free will and likely causing harm.
  • AI systems utilized for social scoring, assessment, or classification of individuals based on their social behavior, inferred or predicted, or personal characteristics, resulting in adverse treatment.
  • AI systems employed to infer emotions in the workplace (e.g., human resource functions) and educational institutions, with exemptions for safety systems.
  • Biometric categorization to infer sensitive data, such as race, sexual orientation, or religious beliefs.
  • Unrestricted and non-targeted scraping of facial images from the internet or CCTV to populate facial recognition databases.
  • Predictive policing of individuals, defined as predicting individual behavior, including the likelihood of offense or re-offense.
  • Law enforcement use of real-time remote biometric identification (RBI) systems in publicly accessible spaces (with specific exceptions).

Exceptions for law enforcement:

There are exceptions for the use of RBI systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorization and strictly defined lists of criminal offenses.

High-risk systems: which use cases are subject to conformity assessments and obligations?

The AI Act identifies potential high-risk applications in Annex II and Annex III. The European Commission has the authority to revise these annexes as new applications and associated risks emerge. The current list of identified high-risk applications includes:

AI systems applied in contexts carrying a significant risk of harm to health, safety, or fundamental rights, such as:

  • Biometric identification and categorization of individuals
  • Management and operation of critical infrastructure, particularly safety components of traffic, water, gas, heating, and electricity infrastructure
  • Education and vocational training, particularly systems determining access to education and assessing students
  • Employment, worker management, and access to self-employment, encompassing recruitment and performance monitoring
  • Access to and enjoyment of essential private and public services and benefits, including eligibility for benefits, creditworthiness evaluation, and life and health insurance pricing
  • Law enforcement applications like data analytics systems for assessing evidence of criminal activity, such as financial fraud detection systems
  • Migration, asylum, and border control management, covering monitoring migration trends, border surveillance, verification of travel documents, and examination of applications for visas, asylum, and residence permits
  • Administration of justice and democratic processes, including researching and interpreting the law

Exceptions

Exceptions to the high-risk classification include situations where an AI system:

  1. Performs a narrowly defined procedural task with no direct safety or security implications
  2. Is designed to review or enhance the quality of human output
  3. Is utilized to identify decision-making patterns (or deviations from existing patterns to highlight inconsistencies) without directly influencing decisions

High-risk systems: what are the obligations for providers of high-risk AI systems in Europe?

General obligations and responsibilities:

Mandatory requirements for high-risk AI systems encompass:

  1. Establishing and maintaining appropriate AI risk and quality management systems
  2. Implementing effective data governance practices
  3. Maintaining adequate technical documentation and record-keeping procedures
  4. Ensuring transparency and furnishing information to users
  5. Facilitating and conducting human oversight
  6. Adhering to standards for accuracy, robustness, and cybersecurity relevant to the intended purpose
  7. Registering high-risk AI systems on the EU database before introducing them to the market; systems employed for law enforcement, migration, asylum, and border control, as well as critical infrastructure, will be recorded in a non-public section of the database

Pre-market conformity assessment for high-risk systems

Providers are required to conduct a conformity assessment on the high-risk AI system before its market introduction. The assessment should ascertain compliance with the aforementioned requirements.

In most instances, providers may conduct self-assessment if:

  • They employ procedures and methodologies aligned with EU-approved technical standards (harmonized standards)
  • Application of these standards creates a presumption of conformity

A third-party conformity assessment by an accredited body becomes necessary if any of the following conditions apply:

  • The AI system is part of a safety component subject to third-party assessment under sectoral regulations
  • The AI system is part of a biometric identification system
  • Harmonized standards are not utilized
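
The choice between self-assessment and a notified body boils down to a simple disjunction over the conditions above; here is a sketch, with parameter names of my own invention:

```python
def needs_third_party_assessment(is_safety_component_under_sectoral_rules: bool,
                                 is_biometric_identification: bool,
                                 uses_harmonized_standards: bool) -> bool:
    """Third-party (notified body) assessment is required if ANY condition holds;
    otherwise the provider may self-assess against harmonized standards."""
    return (is_safety_component_under_sectoral_rules
            or is_biometric_identification
            or not uses_harmonized_standards)

# A credit-scoring system built against harmonized standards:
print(needs_third_party_assessment(False, False, True))   # False -> self-assessment
# A remote biometric identification system:
print(needs_third_party_assessment(False, True, True))    # True -> notified body
```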

Post-market responsibilities

Following the market introduction of a high-risk AI system, providers are obligated to ensure ongoing safe performance and conformity throughout the system’s lifecycle. These responsibilities include:

  • Maintaining logs generated by high-risk systems, within their control, for a minimum of six months
  • Promptly taking corrective actions for nonconforming systems already in the market and informing other operators in the value chain of such instances
  • Collaborating with national competent authorities or the AI Office (as detailed in the relevant section below) by sharing all necessary information and documentation upon receiving a reasonable request
  • Monitoring the performance and safety of AI systems throughout their lifespan and actively evaluating continuous compliance with the AI Act
  • Reporting to the appropriate authorities any serious incidents and malfunctions leading to breaches of fundamental rights
  • Undergoing new conformity assessments for substantial modifications (e.g., changes to a system’s intended purpose or alterations affecting regulatory compliance):
    • This applies irrespective of whether the changes are implemented by the original provider or any third party.
    • For AI systems deemed to have limited or minimal risk, it is crucial to verify whether the original risk classification remains applicable after any changes

High-risk AI: what are the obligations for deployers, importers, and distributors of high-risk AI systems?

Deployers:

These obligations apply to both public bodies and private entities offering services of general interest (such as banks, insurers, hospitals, and schools) that deploy high-risk systems:

  • Incorporating human oversight by individuals possessing the necessary training and competence
  • Ensuring the relevance of input data to the system’s intended use
  • Halting the system’s use if it poses a risk at the national level
  • Reporting any significant incidents to the AI system provider
  • Preserving automatically-generated system logs
  • Adhering to pertinent registration requirements when the user is a public authority
  • Complying with GDPR obligations for conducting a data protection impact assessment
  • Verifying the AI system’s compliance with the AI Act and providing evidence of all relevant documentation
  • Informing individuals about the potential use of high-risk AI

Importers and distributors:

Before introducing a high-risk AI system to the market, importers and distributors are responsible for confirming the system’s compliance with the AI Act, ensuring the provision of all relevant documentation, and maintaining communication with the provider and market surveillance authorities accordingly.

Transparency risk AI systems: obligations

Providers

For AI systems subject to transparency obligations (e.g., chatbots), providers must design and develop them so that users understand from the outset that they are interacting with an AI system.
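
As a trivial illustration of this “transparency by design” idea for a chatbot, here is a sketch that front-loads the disclosure (wording and function name are assumptions of mine):

```python
def start_chat_session(bot_name: str) -> str:
    # Disclose the AI nature of the interlocutor before any other output.
    return (f"You are chatting with {bot_name}, an AI system. "
            "You are not talking to a human.")

print(start_chat_session("SupportBot"))
```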

Deployers

Deployers must:

  • Inform individuals and obtain their consent when using permissible emotion recognition or biometric categorization systems.
  • Disclose and clearly label instances where visual or audio “deep fake” content has been manipulated by AI.

How will general-purpose AI be regulated?

The final definition of general-purpose AI (GPAI) models in the AI Act has not yet been made available, but it will likely look like this:

‘General-purpose AI model’ means an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is released on the market, and that can be integrated into a variety of downstream systems or applications.

The AI Act adopts a tiered approach to compliance obligations, differentiating between high-impact GPAI models with systemic risk and other GPAI models.

| Tier | Description | Compliance level |
| --- | --- | --- |
| Base-level tier | Models meeting the GPAI definition | Limited transparency obligations |
| Systemic risk tier | High-impact GPAI models posing a systemic risk, provisionally identified by the cumulative amount of computing power used for training (greater than 10^25 floating point operations [FLOPs]) | Significant obligations |
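
To get a feel for the 10^25 FLOP threshold, here is a back-of-the-envelope sketch using the widely cited ~6 × parameters × training-tokens approximation for dense transformer training compute. Both the approximation and the example model sizes are assumptions, not part of the Act.

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the provisional AI Act criterion

for name, params, tokens in [
    ("70B params / 2T tokens", 70e9, 2e12),    # hypothetical mid-size model
    ("1T params / 10T tokens", 1e12, 10e12),   # hypothetical frontier-scale model
]:
    flops = training_flops(params, tokens)
    over = flops > SYSTEMIC_RISK_THRESHOLD
    print(f"{name}: {flops:.1e} FLOPs -> systemic risk tier: {over}")
```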

Providers of all GPAI models are mandated to:

  • Maintain current technical documentation.
  • Share information with downstream providers planning to integrate the GPAI model into their AI systems.
  • Adhere to EU copyright law.
  • Disseminate concise summaries of the content used for training.

Providers of high-impact GPAI models with systemic risk implications must:

  • Conduct model evaluations.
  • Address and mitigate systemic risks.
  • Document and report any significant incidents and the corrective measures taken to the European Commission.
  • Implement adversarial training of the model (i.e., “red-teaming”).
  • Ensure the presence of adequate cybersecurity and physical protections.
  • Document and report the estimated energy consumption of the model.

To maintain adaptability to swift GPAI technology advancements, the AI Office will:

  • Revise the criteria for designating high-impact GPAI, potentially incorporating factors such as the number of model parameters, dataset quality or size, and the count of registered business or end users.
  • Facilitate the development of codes of practice to support the application of compliance requirements.

How will new standards be developed and when will they be ready?

To reduce compliance burdens and speed up time-to-market, the AI Act allows for compliance self-assessment, provided the obligations are met using European Commission-approved industry best practices as formalized in “harmonized standards”.

  • The European Commission has issued a “standardization request” to the European standards bodies (CEN and CENELEC), listing a series of topics for which new harmonized standards are required to cover the compliance obligations in the AI Act (see section on pre-market obligations of high-risk AI systems above).
  • The European standardization bodies aim to have standards available in time for implementation of the AI Act in accordance with the agreed timelines (see above), but their readiness is not guaranteed.
  • Where possible the European standardization bodies will seek to adopt standards created by the international standards bodies (ISO and IEC), with minimal modification.

You can have a look at ISO 42001 here (ISO/IEC 42001:2023)

How will the AI Act interface with current legislation and standards?

  • AI providers are required to maintain compliance with all pertinent EU laws while integrating the stipulations of the AI Act.
  • Providers can integrate AI Act requirements with existing procedures to prevent redundancy and streamline the compliance process.
  • The AI Act should be incorporated into applicable EU laws (e.g., financial services regulations) when relevant. Sectoral regulators will be appointed as the competent authorities responsible for overseeing the implementation of the AI Act within their sector.

How does the AI Act aim to support AI innovation in the EU?

Real-world testing

Testing of AI systems in real-world conditions outside of AI regulatory sandboxes may be conducted by providers or prospective providers of the high-risk AI systems listed in Annex III of the AI Act (see above), at any time before being placed on the market, if the following conditions are met:

  • A testing plan has been submitted to, and approved by, the market surveillance authorities
  • The provider is established in the EU
  • Data protection rules are observed
  • Testing does not last longer than necessary and no more than six months (with the option to extend by an additional six months)
  • End users have been informed, have given their consent, and have been provided with relevant instructions
  • The predictions, recommendations and decisions of the AI system can be effectively reversed or disregarded
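
As a minimal sketch of the duration constraint in this list (six months, extendable once by six months), with a function name and inputs of my own choosing:

```python
def testing_duration_ok(months_planned: int, extension_used: bool) -> bool:
    """Real-world testing may run at most 6 months, plus one 6-month extension."""
    limit = 12 if extension_used else 6
    return months_planned <= limit

print(testing_duration_ok(5, extension_used=False))   # True
print(testing_duration_ok(9, extension_used=False))   # False -> needs extension
print(testing_duration_ok(9, extension_used=True))    # True
```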

AI regulatory sandboxes

The AI Act mandates the establishment of AI regulatory sandboxes to offer innovation support across the EU.

  • These regulatory sandboxes are controlled environments in which providers and deployers (e.g., small and medium enterprises) can voluntarily experiment, test, train, and validate their systems under regulatory supervision before placing them on the market.
  • Each Member State will be expected to create or join a sandbox with common rules for consistent use across the EU.
  • AI system providers will be able to receive a written report about their sandbox activities as evidence that they have met AI Act requirements. This is intended to speed up the approval process to take AI systems to market.

What’s next?

Beyond the fact that this is enough writing for a single post (not enough images, etc.), and that this might kill my already poor SEO score, here is what’s next:

The EU AI Act’s next steps involve ongoing refinement of any remaining technical details by officials from the EU institutions in the coming weeks. Following agreement on the final text, it will be presented to the European Parliament and Council for approval in the first half of 2024.

Once the approved text is translated into the official languages of the EU, it will be published in the Official Journal, and the AI Act will become effective 20 days after publication, initiating the implementation period.

On an international level, the European Commission and other EU institutions will persist in collaborating with multinational organizations, including the Council of Europe, the U.S.–EU Trade and Technology Council (TTC), the G7, the OECD, the G20, and the UN. Their collective efforts aim to advance the development and adoption of rules beyond the EU that align with the requirements of the AI Act.

The EU AI Pact

The European Commission is launching the AI Pact, which seeks the voluntary commitment of industry to start implementing the requirements of the AI Act ahead of the legal deadline:

  • Commitments will take the form of pledges that will be published by the EU Commission.
  • The AI Pact will convene key EU and non-EU industry actors to exchange best practices.
  • Interested parties will meet in the first half of 2024 to collect ideas and best practices that could inspire future pledges.
