AI developers agree to test new models with governments before releasing them

BLETCHLEY PARK, England: In a potentially historic decision made at the UK’s artificial intelligence conference, leading AI developers agreed to work with governments to test new frontier models before their release, in order to help manage the risks of the rapidly advancing technology.

Governments and organizations are racing to create regulations and safeguards because some tech and political experts have warned that, left unmanaged, AI could erode consumer privacy, endanger human safety, and bring about a worldwide disaster.

At the first-ever AI Safety Summit held on Wednesday at Bletchley Park, the site of Britain’s World War II code-breakers, political leaders from the US, EU, and China decided to collaborate on defining risks and developing strategies to reduce them.

The United States, the European Union, and other “like-minded” nations have also reached an agreement with a small number of cutting-edge AI businesses, according to British Prime Minister Rishi Sunak, on the principle that models should be thoroughly evaluated both before and after they are deployed.

Known as the “Godfather of AI,” Yoshua Bengio will help present a “State of the Science” report to foster a common understanding of the opportunities and threats that lie ahead.

“Until now the only people testing the safety of new AI models have been the very companies developing it,” Sunak said in a statement. “We shouldn’t rely on them to mark their own homework, as many of them agree.”


About a hundred politicians, academics, and tech executives have gathered at the summit to discuss how to proceed with a technology that has the potential to transform how businesses, societies, and economies function. Some are even pushing to create an independent body to provide global oversight.

A Chinese vice minister attended the meeting on Wednesday alongside other political officials, a first for Western-led efforts to oversee the safe development of AI. The event’s emphasis was on highly capable general-purpose models known as “frontier AI”.

Vice Minister of Science and Technology Wu Zhaohui signed the “Bletchley Declaration” on Wednesday, but China was not present on Thursday and did not sign the testing agreement.

After several Western countries curtailed their technology collaboration with Beijing, Sunak faced criticism from some MPs in his own party for inviting China. However, Sunak maintained that any effort to keep AI safe had to include the technology’s major stakeholders.

He said it demonstrated the role Britain could play in uniting the three major economic blocs: the United States, China, and the European Union.

At a news conference, Sunak said, “It speaks to our ability to convene people, to bring them together.” “It wasn’t an easy decision to invite China, and lots of people criticised me for it, but I think it was the right long-term decision.”

At the conference on Thursday, leaders such as U.S. Vice President Kamala Harris, U.N. Secretary-General António Guterres, European Commission President Ursula von der Leyen, and representatives of Microsoft-backed OpenAI, Anthropic, Google DeepMind, Microsoft, Meta, and xAI attended sessions.

Complex algorithms can never be fully tested, the EU’s von der Leyen said, so “above all else, we must make sure that developers act swiftly when problems occur, both before and after their models are put on the market.”

Sunak and billionaire businessman Elon Musk will hold the final AI discussion of the two-day event. It will air later on Thursday on Musk’s X, the platform formerly known as Twitter.

Two insiders at the meeting said that on Wednesday, Musk advised other delegates not to hurry into passing AI laws.

Rather, he said that businesses using the technology were better placed to identify issues and could report back to the lawmakers drafting new legislation.

Musk told reporters on Wednesday, “I don’t know what the fair rules are necessarily, but you’ve got to start with insight before you do oversight.”
