The world’s largest artificial intelligence companies are pressing the UK government to speed up safety testing of AI systems, as the country seeks to position itself as a leader in regulating the rapidly evolving technology.
A number of tech companies, including Microsoft, OpenAI, Google DeepMind, and Meta, voluntarily committed in November to let Britain’s new AI Safety Institute evaluate their latest generative AI models. The companies pledged at the time to adjust their models if the institute found flaws in the technology.
Several people familiar with the process say the AI companies are seeking clarity on the tests the AI Safety Institute (AISI) is conducting, how long they will take, and how feedback will be provided if any risks are discovered.
According to people close to the tech companies, they are under no legal obligation to change or delay the release of their products on the basis of AISI’s safety testing.
On Monday, Ian Hogarth, chair of the AISI, said in a LinkedIn post that the institute is putting into practice the principle, agreed by the companies, that governments should test their models before release.
“Testing of models is already under way working closely with developers,” the UK government told the Financial Times. “We welcome ongoing access to the most capable AI models for pre-deployment testing — one of the key agreements companies signed up to at the AI Safety Summit,” which took place in November at Bletchley Park.
“We will share findings with developers as appropriate. However, where risks are found, we would expect them to take any relevant action ahead of launching.”
The dispute with the tech companies highlights the limits of relying on voluntary agreements to set the boundaries of a rapidly advancing technology. On Tuesday the government laid out the conditions for “future binding requirements”, emphasizing that leading AI developers must be held accountable for keeping their systems safe.
Prime Minister Rishi Sunak wants the UK to play a central role in addressing the existential risks posed by the rise of AI, such as the technology’s use in damaging cyberattacks or the development of bioweapons. The government-backed AI Safety Institute is essential to this ambition.
People with close knowledge of the situation say the AISI has begun testing AI models that are already available and has access to models that have not yet been released publicly, such as Google’s Gemini Ultra.
According to one source, testing has focused on the risks of AI misuse, particularly in cyber security, drawing on the expertise of the National Cyber Security Centre, part of the Government Communications Headquarters (GCHQ).
According to recently disclosed government contracts, the AISI has spent £1 million acquiring the capability to test for “jailbreaking”, in which prompts are crafted to trick AI chatbots into bypassing their safeguards, and “spear-phishing”, in which individuals and organizations are targeted, typically via email, to steal confidential data or spread malware.
Another contract covers the development of “reverse engineering automation”, the automated analysis of source code to determine its function, structure, and design.
“The UK AI Safety Institute has access to some of our most capable models for research and safety purposes to build expertise and capability for the long term,” Google DeepMind said.
“We value our collaboration with the institute and are actively working together to build more robust evaluations for AI models, as well as seek consensus on best practices as the sector advances.”