On 14 June 2023, the European Parliament (“EU Parliament”) published its final position on the Artificial Intelligence Act (“AI Act”). Thus, after the initial draft of the European Commission (“EU Commission”) and the position of the Council of the European Union (“Council”), all three drafts are now available and the way is clear for the trilogue. The first trilogue meeting took place on the same day as the parliamentary vote – apparently all parties involved want to reach a common final position as soon as possible.
What has happened so far
On 21 April 2021, the EU Commission published its initial draft of the AI Act, which contains harmonized rules for developing, marketing and using AI systems, and aims to strengthen society’s trust in AI systems without blocking the opportunities opened up by this technology. To this end, AI systems are classified into four risk categories following a risk-based approach: unacceptable, high, low and minimal. AI systems with unacceptable risk, such as systems that distort human behavior or assess the trustworthiness of persons based on their social behavior (so-called social scoring), will be banned. High-risk AI systems, such as systems that make decisions about persons in areas sensitive to fundamental rights, are subject to comprehensive regulation and must meet strict requirements for use. AI with low and minimal risk, on the other hand, such as chatbots or spam filters, will remain largely unregulated in order to maintain competitiveness in the EU.
There have been several developments since the EU Commission’s first draft. On 6 December 2022, the Council adopted its common position. Its proposed amendments include a more precise definition of an AI system in order to obtain sufficiently clear criteria for distinguishing AI from simpler software systems. The Council also proposed expanding the scope of prohibited AI practices: the ban on the use of AI for social scoring should also apply to private actors. Moreover, under the Council’s proposal, AI systems that are not likely to cause serious violations of fundamental rights or other significant risks will not be classified as high-risk systems. For small and medium-sized enterprises (so-called SMEs), the upper limit on fines was halved. In addition, the Council revised the conformity assessment procedure and the provisions on market surveillance to allow for more efficient and easier implementation.
Position of the EU Parliament
The EU Parliament has now also adopted its own proposal with numerous suggestions for improving the Commission’s draft. The most important changes will be discussed in more detail below.
Changes in definitions
The EU Parliament has revised the definitions in many respects. The central point is the new definition of the term “artificial intelligence system” and thus a change in the scope of the AI Act. An AI system is now defined as a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions that influence physical or virtual environments. With this, parliamentarians have largely adopted the OECD’s definition. Even though the definition thus comes from an established international body, it has been criticized as too broad, since technically simple devices – such as smart home devices or “normal” software – could also fall under the term.
Besides that, there are other interesting changes in definitions. For example, in order to better differentiate between the companies that use AI and those end users whose rights are affected by AI, “users” have been renamed “deployers”. Furthermore, the term “affected person” has been introduced. This covers natural persons or groups of persons who are subject to or otherwise affected by an AI system.
More prohibitions and changes for high-risk systems
Further important changes concern prohibited and high-risk AI systems. Still banned under the parliamentary draft are systems for the subliminal distortion of human behavior, the exploitation of a person’s vulnerabilities, and social scoring. In addition, however, the draft bans a number of other AI systems, including AI applications for predictive policing and risk assessment tools, biometric categorization systems, facial recognition databases created by scraping social media or surveillance footage, and emotion recognition systems in certain areas such as law enforcement. The ban on biometric identification systems has also been extended.
The scope of systems classified as high-risk has changed somewhat less. One notable addition could have great practical relevance: recommender systems of very large online platforms (so-called VLOPs) are now also classified as high-risk. AI systems that influence elections or voting behavior are likewise to be classified as high-risk. Another important change is the introduction of a second level of classification for high-risk AI systems: such systems will only be considered high-risk if they pose a significant risk to the health, safety or fundamental rights of natural persons. In some cases, a significant risk of damage to the environment can also lead to a high-risk classification. In this context, the Commission – after consulting the planned “AI Office” of the EU and relevant stakeholders – will provide guidelines specifying the circumstances under which such a significant risk exists.
Comprehensive regulation of generative AI
Important changes can also be found regarding foundation models and so-called generative AI – the main focus of recent public discussion about AI since the release of ChatGPT at the end of November 2022. In the EU Commission’s initial draft these AI systems were not regulated separately, nor were they adequately addressed in the Council’s draft, which was completed before the hype surrounding ChatGPT. However, the rapid development of this type of AI application and the subsequent media discussion quickly made clear that there is a need for legislative action here.
The EU Parliament lays down various requirements for providers of foundation models, i.e., “AI models that are trained on broad data at scale, are designed for generality of output and can be adapted to a wide range of distinctive tasks”. These include establishing a risk management system, using appropriate datasets, ensuring adequate quality (performance, predictability, safety, etc.) through appropriate measures, complying with energy efficiency standards, producing adequate technical documentation and instructions for use, establishing a quality management system, and registering the foundation model.
Additional obligations are set out for providers of generative AI, i.e., “foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex texts, images, audio or video”.
They must comply with transparency obligations, ensure adequate safeguards against the generation of content that breaches EU law, use state-of-the-art technology without prejudicing fundamental rights such as freedom of expression, and make publicly available a summary of the use of training data protected under copyright law.
Lower fines, but more rights for affected persons
Also noteworthy are the adjustments to the fines for violations of the provisions of the AI Act. The EU Parliament has significantly reduced the fines overall. Only for placing prohibited AI systems on the market has the maximum fine been raised, from 30 million to 40 million euros (or 7% of worldwide annual turnover) – more than in the Commission’s and the Council’s drafts. The caps for SMEs have been removed, but size and market share will be taken into account when assessing fines. The reduced fines in the parliamentary draft are, however, accompanied by a strengthening of the rights of affected persons: the draft provides for additional rights to file complaints about AI systems and to receive explanations of decisions made by high-risk AI systems that have a significant impact on those affected.
Outlook
Although the Council and the EU Parliament have not questioned the core of the AI Act, such as the risk-based approach envisaged by the EU Commission, the drafts differ considerably in detail. It will be no easy task to reconcile the different conceptions of the definition of AI, and thus of the scope of the AI Act. The ban on biometric identification systems and possible exceptions to it will also be a major point of contention, as the numerous amendments on the day of the vote and the preceding disputes in the EU Parliament, which were widely reported in the media, suggest. Whether the EU Parliament’s line will ultimately prevail remains to be seen. In any case, despite the great time pressure they have set themselves, it is to be hoped that the three institutions will succeed in reaching a sensible compromise that protects European values against the uncontrolled use of AI while leaving companies sufficient room for innovation.
