The UK’s AI Safety Summit takes cutting-edge AI as seriously as the climate crisis

By Yogesh Hirdaramani

At the AI Safety Summit held in the United Kingdom last week, leaders from 28 countries and the European Union agreed to collaborate on identifying AI safety risks and implementing risk-based policies to mitigate the potential harms of advanced AI models.

Global leaders gathered at the AI Safety Summit to discuss the risks posed by AI. Image: UK Government via Facebook

Leaders from 28 countries, including the United States, China, Singapore, and the Philippines, and the European Union have signed an agreement to cooperate on monitoring and mitigating the risks of “frontier AI”, or advanced AI models with the potential for “serious, even catastrophic, harm”.

“A serious strategy for AI safety has to begin with engaging all the world’s leading AI powers, and all of them have signed the Bletchley Park communiqué,” said Rishi Sunak, Prime Minister of the United Kingdom, at the AI Safety Summit held on 2 November.

Commentators have lauded China’s participation in the Summit; John Tasioulas, Director of the Institute for Ethics in AI at the University of Oxford, hailed it as the event’s “biggest achievement” in a release.

Building on this agreement, 10 countries, including Singapore, signed an AI safety testing plan, committing to work with leading AI companies to test the next generation of AI models against critical risks.

“Until now, the only people testing the safety of new AI models have been the very companies developing it. That must change,” said PM Sunak, pointing to the need for governments to step up.

The two-day summit aimed to evaluate the risks posed by frontier AI models and explore what national policymakers and other stakeholders can do to better mitigate these risks and harness AI for the public good. Here are some of the key summit outcomes.

State of Science report emulates the Intergovernmental Panel on Climate Change reports

As part of the summit, leaders agreed to set up a panel to publish an AI “State of Science” report, modelled after the Intergovernmental Panel on Climate Change (IPCC), which regularly produces reports that review the state of climate change research. 

Yoshua Bengio, a Canadian computer scientist and university professor known as a “Godfather of AI”, will oversee the inaugural report. The report aims to assess and summarise existing research on the risks and capabilities of frontier AI and provide recommendations for further AI safety research. It will be published ahead of the next AI Safety Summit, to be held in South Korea next year.

In June this year, PM Rishi Sunak had urged governments to treat AI with the same urgency as climate change, calling for stronger multilateral cooperation on the matter.

The sentiment was echoed in the lead-up to the Summit by Demis Hassabis, chief of DeepMind, Google’s AI subsidiary, who called for governments to draw inspiration from the IPCC reports.

New AI Safety Institutes from the UK and the US

Both the UK and the US announced investments in new AI Safety Institutes dedicated to identifying, testing, and mitigating AI risks.

The United Kingdom’s new AI Safety Institute aims to test new types of frontier AI, before and after release, against risks ranging from social harms, like bias and misinformation, to extreme risks such as humanity losing control of AI. 

Singapore’s Infocomm Media Development Authority has partnered with the UK’s new AI Safety Institute to build capabilities and tools for evaluating frontier AI models, said Singapore Minister for Communications and Information Josephine Teo, who attended the event alongside Prime Minister Lee Hsien Loong.

The agency is currently spearheading the AI Verify initiative, an AI governance testing framework and software toolkit that businesses can use to voluntarily test their AI solutions, as GovInsider has reported.

A week earlier, the US launched its own AI Safety Institute to evaluate the safety of emerging AI models. In addition, President Joe Biden has signed an executive order compelling developers of AI systems with implications for US national security, the economy, health, or safety to share AI testing results with the government before release.

In her remarks at the AI Safety Summit, Vice President Kamala Harris said that the US will monitor not only AI-enabled threats facing humanity as a whole, but also threats to individuals and communities, such as misinformation and AI bias.

Critics question focus on existential threats

Critics have called into question the Summit’s focus on the existential threats posed by AI, such as the use of AI to create more advanced bioweapons and cybersecurity threats, or the emergence of AI that can evade human control entirely.

In an open letter, organisations such as Mozilla, the Trades Union Congress, and the Open Data Institute argued that the Summit overlooks the immediate risks that AI poses to ordinary individuals, such as predictive policing and algorithmic injustice.

“Frontier AI should not be used as an excuse to avoid regulating the well-established harms of today’s AI systems,” said Professor Brent Mittelstadt, Director of Research at the Oxford Internet Institute, in a release by Oxford University.

More than 100 organisations and individuals signed the letter, which criticised the sidelining of civil society at the Summit and called for greater openness.

In a post on X, Yann LeCun, Vice President and Chief AI Scientist at Meta, stated that an excessive focus on the existential risks of AI could stymie open-source AI platforms and concentrate AI knowledge in the hands of big tech players.