
A State-Backed AI Safety Tool Is Unveiled in the UK

The United Kingdom has unveiled what it describes as a groundbreaking toolset for artificial intelligence (AI) safety testing.

The new product, named “Inspect,” was unveiled on Friday, May 10, by the nation’s AI Safety Institute. It is a software library that lets testers, including international governments, startups, academics and AI developers, evaluate the capabilities of specific AI models and assign a score based on their findings.
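To make the evaluate-and-score workflow concrete, here is a hypothetical sketch of what such an evaluation harness might look like. The function names, scoring rule, and toy model below are illustrative assumptions for this article and do not reflect Inspect’s actual API.

```python
# Hypothetical sketch of a model-evaluation harness of the kind the
# article describes: run a model against test prompts, score each
# response, and aggregate the scores into a single capability score.
# All names here are illustrative, not Inspect's real interface.

def evaluate_model(model, test_cases):
    """Score a model on (prompt, expected) pairs; return the mean score."""
    scores = []
    for prompt, expected in test_cases:
        response = model(prompt)
        # Simple pass/fail scorer: does the response contain the answer?
        scores.append(1.0 if expected.lower() in response.lower() else 0.0)
    return sum(scores) / len(scores)

# A toy "model" standing in for a real AI system under test.
def toy_model(prompt):
    return "The answer is 4." if "2 + 2" in prompt else "I don't know."

cases = [
    ("What is 2 + 2?", "4"),
    ("Name the capital of France.", "Paris"),
]
print(evaluate_model(toy_model, cases))  # 0.5: one of two cases passes
```

A real framework would add richer scorers (model-graded, semantic matching), dataset loaders, and per-model adapters, but the evaluate-then-score loop is the core idea.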

According to the institute’s news release, Inspect is the first AI safety testing platform overseen by a government-backed organization to be made available for public use.

As part of the United Kingdom’s ongoing effort to lead the field in AI safety, Michelle Donelan, the secretary of state for science, innovation, and technology, announced that Inspect, the AI Safety Institute’s testing platform, is now open source.

This solidifies the United Kingdom’s leadership position in this field and places British inventiveness at the center of the worldwide push to make AI safe.

Less than a month has passed since the US and UK governments agreed to cooperate on testing the most cutting-edge AI models as part of a joint effort to build safe AI.

“AI continues to develop rapidly, and both governments recognize the need to act now to ensure a shared approach to AI safety which can keep pace with the technology’s emerging risks,” the U.S. Department of Commerce said at the time.

The two governments also decided to “tap into a collective pool of expertise by exploring personnel exchanges” between their organizations and to establish alliances with other countries to promote AI safety globally. They also intended to conduct at least one joint test on a publicly accessible model.

The partnership follows commitments made at the AI Safety Summit in November of last year, where world leaders explored the need for global cooperation in combating the potential risks associated with AI technology.

“This new partnership will mean a lot more responsibility being put on companies to ensure their products are safe, trustworthy and ethical,” AI ethics evangelist Andrew Pery of global intelligent automation company ABBYY told PYMNTS soon after the collaboration was announced.

To gain a competitive edge, creators of disruptive technologies often release their products with a “ship first, fix later” mindset. OpenAI, for instance, released ChatGPT for widespread commercial use, despite being reasonably open about its possible risks, and the tool has had negative effects.

By Kajal Chavan
