EU adopts a common position on the Artificial Intelligence rulebook
EU ministers agreed on a general approach to the AI Act at the Telecom Council meeting on 6 December. Let’s have a look at the main changes.
The AI Act is a legislative proposal to regulate Artificial Intelligence (see our other posts on this topic) based on its potential to cause harm. The legislative process involves two steps:
- The EU Council is the first co-legislator to complete this step
- The European Parliament is due to finalize its version around March 2023
The text was one of the priorities of the Czech Presidency. It:
- Considers the key concerns of the member states
- Preserves the delicate balance between the protection of fundamental rights and innovation in AI technology
The EU Council’s position on the flagship legislation to regulate Artificial Intelligence was shared on Friday (8 November), with some last-minute adjustments made by the Czech Presidency.
We summarize and list those for you:
AI definition
The AI definition is critical, as it determines the scope of the entire law.
Member states were concerned that traditional software would be caught. A narrower definition has therefore been agreed upon: systems developed through machine learning and logic- and knowledge-based approaches.
In a flexible approach, the Commission will be able to specify or update those elements later via delegated acts.
General purpose AI
General-purpose AI now comprises large language models, which can be adapted to carry out various tasks.
Initially, it did not fall under the scope of the AI Act, which only envisaged systems with a defined objective.
However, leaving these critical systems out of scope would have weakened the AI framework, especially since this market is moving fast and needs guidance.
Therefore, an impact assessment and a consultation, on the basis of which the rules for general-purpose AI will be adapted, will be conducted within 18 months of the entry into force of the AI Act.
Prohibited practices and social scoring
The AI rulebook bans the use of the technology for:
- Subliminal techniques
- Exploiting vulnerabilities
- Establishing a social scoring
Moreover, the social scoring ban was extended to private actors, to prevent public bodies from circumventing it through sub-contracting.
The concept of vulnerability was also extended to cover socio-economic aspects.
High-risk AI systems
Annex III lists the uses of AI that are considered at high risk of harming people or property. These systems must comply with considerably stricter legal obligations.
Classification fine-tuning: to be classified as high-risk, a system must now carry decisive weight in the decision-making process, meaning purely accessory AI would not fall into the category.
The concept is still vague; the Commission shall define it via an implementing act.
Removed from the list:
- Deepfake detection by law enforcement authorities
- Crime analytics
- The verification of the authenticity of travel documents.
Added to the list:
- Critical digital infrastructure
- Life insurance
- Health insurance
Monitoring: the obligation for providers of high-risk systems to register in an EU database has been extended to users that are public bodies (except law enforcement).
The general approach clarifies the allocation of responsibilities along complex AI value chains and the interaction with existing sectoral legislation.
High-risk systems will have to comply with requirements concerning, mainly:
- Dataset quality
- Technical documentation
- Post-launch monitoring: required for high-risk systems, with an exemption for law enforcement
- Reporting serious incidents to the provider: required for high-risk systems, with an exemption for law enforcement, to protect sensitive information stemming from law enforcement activities
More generally, AI applications related to national security, defence and the military are excluded from the scope. The ability of police agencies to use ‘real-time’ remote biometric identification systems in exceptional circumstances remains untouched.
Board of Governance
The governance provisions cover:
- The competent national authorities
- The AI Board, which now includes usual elements such as a pool of experts
The penalties for breaching the AI Act’s obligations were made lighter for small and medium-sized enterprises (SMEs). A set of criteria has been introduced for national authorities to consider when calculating sanctions.
The AI Act provides for regulatory sandboxes: controlled environments, under the supervision of an authority, where companies can test AI solutions.
Testing may also occur in real-world conditions, possibly unsupervised.
The transparency requirements for emotion recognition systems and deepfakes have been strengthened.