European Parliament Adopts Negotiating Mandate on European Union’s Artificial Intelligence Act; Trilogues Begin

27 June 2023

On 14 June 2023, the European Parliament (Parliament) plenary voted on its position on the Artificial Intelligence Act (AI Act), which was adopted by a large majority, with 499 votes in favor, 28 against, and 93 abstentions. The newly adopted text (Parliament position) will serve as the Parliament’s negotiating position during the forthcoming interinstitutional negotiations (trilogues) with the Council of the European Union (Council) and the European Commission (Commission).

The members of Parliament (MEPs) proposed several changes to the Commission’s proposal, published on 21 April 2021, including expanding the list of high-risk uses and prohibited AI practices. Specific transparency and safety provisions were also added on foundation models and generative AI systems. MEPs also introduced a definition of AI that is aligned with the definition provided by the Organisation for Economic Co-operation and Development. In addition, the text reinforces the right of natural persons (or groups of natural persons) to file complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that significantly affect their fundamental rights.

DEFINITION

The Parliament position provides that AI, or an AI System, should refer to “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.” This amends the Commission’s proposal, under which an AI System was limited to software acting for human-defined objectives, and now also encompasses the metaverse through the explicit inclusion of “virtual environments.”

Agreement on the final version of the definition of AI is expected to be reached at the technical level during trilogue negotiations, as it appears to be a noncontentious item.

Another notable inclusion relates to foundation models (Foundation Models), which were not yet in the public eye when the Commission’s proposal was published. The Parliament position defines a Foundation Model as a subset of AI System that is “trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.”

GENERAL PURPOSE AI

On the basis of the Parliament position, generative AI Systems based on Foundation Models would have to comply with transparency requirements and ensure safeguards against the generation of illegal content. Transparency requirements would include disclosure that content is AI-generated, in order to distinguish so-called “deep-fakes” from real images. Similar to other accountability frameworks, such as the General Data Protection Regulation, Foundation Model providers would be obliged to assess and mitigate possible risks to health, safety, fundamental rights, the environment, democracy, and the rule of law. Additionally, such providers would be required to register their Foundation Models in an EU database before their release on the EU market.

HIGH-RISK AI

The classification of high-risk applications in the Parliament position now includes AI Systems that pose a significant risk of harm to people’s health, safety, fundamental rights, or the environment. Additions to the high-risk list originally proposed by the Commission include AI Systems used to influence voters and the outcome of elections, as well as recommender systems used by social media platforms that are designated as “very large online platforms” under the European Union’s Digital Services Act (see our coverage here). High-risk areas and use cases were made more precise and were extended in the law enforcement and migration control areas. Additionally, providers and “deployers” (a new category of operators) of AI Systems must meet certain obligations depending on the level of risk the AI System is capable of generating.

PROHIBITED PRACTICES

AI Systems with an “unacceptable” level of risk to people’s safety would be prohibited (e.g., systems used for social scoring). MEPs expanded the list to include bans on intrusive and discriminatory uses of AI, including the following:

  • Real-time remote biometric identification systems in publicly accessible spaces.
  • “Post” (after-the-fact) remote biometric identification systems, with the only exception of use by law enforcement for the prosecution of serious crimes, and only following judicial authorization.
  • Biometric categorization systems using sensitive characteristics (e.g., gender, race, ethnicity, citizenship status, religion, political orientation).
  • Predictive policing systems (based on profiling, location, or past criminal behavior).
  • Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions.
  • Untargeted scraping of facial images from the Internet or closed-circuit television footage to create facial recognition databases.

Given that the Parliament plenary rejected last-minute amendments that would have reintroduced the Commission’s initial exceptions to the AI Act’s facial recognition ban, and given the Council’s diverging position on the issue, this is widely expected to be a major point of contention during trilogues.

INNOVATION

Exemptions for research activities and for AI components provided under open-source licenses were added to the Parliament position. The text also promotes regulatory sandboxes (a temporary exemption from relevant regulations during a testing period), provided that they are established by public authorities for the purpose of testing AI Systems before they are placed on the market or otherwise put into service.

AI OFFICE

The Parliament position also contemplates the creation of an AI office (AI Office), which would monitor the AI Act’s implementation. MEPs envisage that the AI Office would have legal personality and act in full independence, with strategic direction provided by member states, which would control the AI Office through its management board, alongside the Commission, the European Data Protection Supervisor, the European Union Agency for Fundamental Rights, and the European Union Agency for Cybersecurity.

Stakeholders would be encouraged to formally participate in the work of the AI Office through an advisory forum that would advise the AI Office on matters related to the AI Act. 

NEXT STEPS

The first trilogue meeting took place on 14 June. At this initial meeting, the EU institutions stated their positions and delegated work at the technical level. 

The first operational trilogue is expected to take place on 23 July 2023. Spain, which will take over the presidency of the Council on 1 July 2023, aims to reach a deal on the AI Act before the end of 2023. 

In preparation for trilogues, rapporteurs from the European People’s Party, one of the main political groups in the Parliament, are soliciting feedback from stakeholders on the Parliament’s AI Act position. Stakeholders can send feedback by email to the rapporteurs’ offices. The informal consultation will be conducted in two phases: comments related to the first-phase items (i.e., High-risk AI Systems (Articles 30–51), Innovation, Database, Codes of Conduct, Penalties, Delegation, and Final Provisions) must be submitted by 30 June 2023, whereas comments related to the second-phase items (Recitals 1–89, General Provisions, Prohibited AI, High-risk AI Systems (Articles 6–29), Transparency, Governance, and Enforcement) must be submitted by 31 July 2023. 

The consultation gives interested stakeholders a rare and important opportunity to provide feedback at an advanced stage of the legislative process, and impacted businesses should take advantage of it to help shape this seminal piece of legislation.

Our Policy and Regulatory practice group remains available to assist you in assessing the possible consequences of the Parliament position and getting your voice heard in the informal consultation.