European Union negotiators clinched a deal on Friday on the world’s first comprehensive artificial intelligence regulations, paving the way for legal oversight of a technology that could transform everyday life but has also stirred fears of existential risks to humanity.
Negotiators from the European Parliament and the bloc’s 27 member countries reached a tentative political agreement on the Artificial Intelligence Act, overcoming deep divisions on contentious points such as police use of facial recognition surveillance and generative AI.
“Deal!” European Commissioner Thierry Breton tweeted just before midnight, adding that the EU is the first continent to set explicit rules for the use of AI.
The outcome came after marathon closed-door talks this week: the first session lasted 22 hours, and a second round began Friday morning.
Officials were under pressure to secure a political win for the flagship legislation. But civil society groups gave the deal a cool reception as they awaited the technical details that still need to be ironed out in the coming weeks, saying it did not go far enough to protect people from the dangers of artificial intelligence.
Daniel Friedlaender, head of the European office of the Computer and Communications Industry Association, a lobby group for the tech industry, said that “today’s political deal marks the beginning of important and necessary technical work on crucial details of the AI Act, which are still missing.”
When the EU released the first draft of its rulebook in 2021, it took an early global lead in the race to develop AI safeguards. But the recent boom in generative AI forced European officials to scramble to update a proposal that was poised to become a global blueprint.
Brando Benifei, an Italian lawmaker co-leading the European Parliament’s negotiating efforts, told The Associated Press late Friday that while the Parliament still needs to vote on the act early next year, that step is now a formality with the deal in place.
Asked whether the deal contained everything he wanted, he replied via text: “It’s very very good. Overall, very good, but obviously we had to accept some compromises.” The proposed law, which would not take full effect until 2025 at the earliest, would impose severe penalties for violations of up to 35 million euros ($38 million) or 7% of a company’s global turnover.
Generative AI systems such as OpenAI’s ChatGPT have taken the world by storm with their ability to produce human-like text, images, and music. But the quickly advancing technology has also raised concerns about the risks it poses to jobs, privacy, copyright protection, and even human life itself.
Though they’re still catching up to Europe, the United States, United Kingdom, China, and international coalitions like the Group of Seven major democracies have now jumped in with their own proposals to regulate AI.
Strong and comprehensive rules from the EU “can set a powerful example for many governments considering regulation,” said Anu Bradford, a Columbia Law School professor who’s an expert on EU law and digital regulation. Other countries “may not copy every provision but will likely emulate many aspects of it.”
AI companies subject to the EU’s rules, she said, will likely extend some of those obligations to markets outside the bloc. “After all, it is not efficient to re-train separate models for different markets,” she said.
The original intent of the AI Act was to reduce the risks associated with particular AI functions according to a risk scale that ranged from low to unacceptable. Legislators, however, pushed for its expansion to include foundation models—the sophisticated systems that serve as the basis for general-purpose AI services like ChatGPT and Google’s Bard chatbot.
These systems, also known as large language models, are trained on vast troves of text and images scraped directly from the internet. Unlike traditional AI, which processes data and completes tasks according to preset rules, they give generative AI systems the ability to create something new.
Companies building foundation models will have to draw up technical documentation, comply with EU copyright law, and detail the content used for training. The most advanced foundation models that pose “systemic risks” will face extra scrutiny, including requirements to assess and mitigate those risks, report serious incidents, put cybersecurity safeguards in place, and disclose their energy efficiency.