Background
There has been a great deal of discussion about the planned AI Act of the European Union (EU). Since the publication of the initial Commission draft on 21 April 2021, the Council of the EU (Council) and the European Parliament (Parliament) have now also published their drafts with proposed amendments. Since 14 June 2023, negotiations have been ongoing in the trilogue between the European Commission (Commission), Council and Parliament. So, the AI Act is on the home stretch and artificial intelligence will soon be regulated in Europe – or so one would think.
However, it will not happen that soon. Even if the trilogue negotiations lead to a final version of the AI Act by the end of the year, most of the regulations will only apply two years after it enters into force. The Council even suggested that the AI Act should only apply three years after its entry into force. In any case, this leaves a considerable period of time without AI regulation. Since this scenario is likely to cause unease among many in view of the extremely fast and dynamic developments in the AI industry, the Commission is pushing for transitional measures.
AI Pact
One of the Commission’s transitional measures, in cooperation with AI companies, is the so-called “Artificial Intelligence Pact” (AI Pact). Google’s parent company Alphabet was the first company to declare its willingness to cooperate with the Commission in this regard, and the Meta Group, which owns the social media platforms Facebook, Instagram and the messenger app WhatsApp, among others, has already signaled its potential willingness to cooperate.
Within the framework of the AI Pact, which in the eyes of the Commission will function as a kind of transitional solution, players will voluntarily commit themselves to compliance with uniform rules of conduct. In this way, certain standards and rules will be established even before the AI Act enters into force, in order to avoid potential damage from AI systems and, at the same time, to prepare for future obligations arising from the AI Act. However, the details of the AI Pact's content, and the question of whether and which companies will participate in it, remain completely open.
The AI Pact and the AI Act are thus supposed to go hand in hand, but they should not be confused: although similarly named, they are two entirely different regulatory instruments. The AI Act – a proposed EU regulation – is a legislative project of the EU that will apply directly and bindingly in EU member states and must be followed by companies. The AI Pact, also initiated by the Commission, is intended to build on the content of the AI Act's provisions, for example on risk management or data quality, and thus create a smooth transition to future regulation. However, implementation of the AI Pact rests entirely on a voluntary commitment by the participating companies and is therefore not legally enforceable.
The timetable for the AI Pact envisaged by the Commission could be described as quite ambitious. After the AI Pact was announced in May 2023 by the French EU Commissioner Thierry Breton and initial commitments were already made, the schedule for the subsequent summer months included contacts with the Transport, Telecommunications and Energy Council (TTE) – which is responsible among other things for the development of trans-European communication networks – as well as visits to San Francisco, Seoul and Tokyo.
In the third quarter of 2023, the Commission is aiming to intensify the development of the AI Pact. In this phase, industry will be informed about the possibility of an AI Pact, and possible areas where companies might enter into voluntary agreements will be identified. In the fourth quarter of 2023, in which the trilogue on the AI Act will also be concluded, the AI Pact will then be aligned with the AI Act in order to avoid any contradictions and regulatory gaps. Close monitoring is planned for the subsequent period in order to evaluate the effectiveness of the implementation of the AI Pact.
AI code of conduct
In addition to the AI Pact, the EU is also pursuing the development of a non-binding transatlantic code of conduct in cooperation with the USA (AI Code of Conduct). It will contain international standards on risk audits, transparency and other requirements to which companies can then subscribe. However, further details are not yet available. A draft of this project will be published in the third quarter of 2023 and – after an evaluation of feedback from companies – presented to the G7. A final version of the AI Code of Conduct is expected by the end of the year.
Looking across the ocean
The USA has already implemented a project comparable to the AI Pact. On 21 July 2023, seven leading AI companies gathered at the White House to announce their voluntary commitment regarding the use of AI technologies. Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI made a series of commitments based on the three core principles of “safety, security, and trust” to create responsible and safe AI for American citizens. For example, for safe AI products, they will conduct pre-market safety reviews of their AI systems and share information on AI risks with industry, governments, civil society and academics. They are also committed to robust cybersecurity and other safeguards to protect proprietary and unpublished parameters of AI models, and to simplify their mechanisms for reporting AI risks and weaknesses discovered by third parties. To increase public trust in AI, companies will label AI-generated content, highlight appropriate uses of AI as well as its limitations, and conduct more research on the societal risks of AI. They also want to specifically develop AI systems to make a positive contribution to important societal challenges – for example, for more equality or to combat climate change.
Promising approaches?
The Commission’s plan to rely on the voluntary commitment of companies is not new. The EU has proven in the past that it is capable of getting large technology companies to engage in a certain degree of self-regulation through voluntary codes. For example, to combat the spread of so-called hate speech on the internet, in May 2016 the Commission agreed on a “Code of Conduct on Countering Illegal Hate Speech Online” with Facebook, Microsoft, X (formerly Twitter) and YouTube, which other major tech companies such as Instagram, Snapchat, Dailymotion and TikTok joined in the following years. As part of the “Strengthened Code of Practice on Disinformation” from 2022, 44 companies – including some big names in the technology industry – are currently committing to more transparency in political advertising, improved cooperation with so-called fact checkers and easier access to data for researchers. However, X (formerly Twitter) recently dropped out again – which highlights the limitations of the approach.
The AI Pact will now join this collection of voluntary regulatory measures with varying track records. Although in the long term there will be no way around legally enforceable rules, this attempt by the Commission to implement a transitional solution together with the business community until the AI Act enters into force is certainly sensible in order to guarantee a certain minimum level of regulation in advance. On the other hand, such a voluntary commitment, even if this is not obvious at first glance, will also benefit the companies involved. By implementing defined, voluntary standards, they may be able to avoid special national regulations and assert their interests to a greater degree than in a classic legislative procedure with significantly more actors. However, it seems difficult to imagine that their voluntary commitments under the AI Pact will go significantly beyond what the AI Act will prescribe.
So, while the basic idea of the AI Pact is to be supported, the timing of this project seems problematic. The Commission, the Council and the Parliament are currently in the midst of their trilogue negotiations and thus on the verge of an agreement on a final version of the AI Act. Parallel negotiations on the AI Pact with (so far only US) companies – which, despite good intentions, will not entirely set aside their own economic interests – could jeopardize the impartiality of the Commission in its negotiations with the Council and Parliament and thus possibly have a negative impact on the AI Act. Furthermore, it is not yet clear how the AI Act will be designed in detail. It is therefore advisable to refrain from prematurely formulating the AI Pact in order to avoid possible contradictions with the final version of the AI Act.
What does the AI Pact mean for the vast majority of companies? For them, the AI Pact should not change much at first. Either way, they should prepare for the requirements of the AI Act and adapt their processes as necessary, because there is no question that it is on the way.
