UK pivots from preventive to defensive stance with rebranded AI institute
By Si Ying Thian
The renaming of the AI Safety Institute to the AI Security Institute reflects a shift in focus from building reliable and ethical AI systems to countering AI-enabled security threats and crimes.
Following the rebranding of the AI Safety Institute to the AI Security Institute, a new criminal misuse team will be formed within the Institute to conduct research on AI-related crime and security issues in the UK. Image: GOV.UK
The world’s first publicly-backed AI safety institute was set up by the UK government in November 2023.
Less than 18 months later, the institute is being rebranded as the AI Security Institute.
The new branding was announced by the UK Secretary of State for Science, Innovation and Technology, Peter Kyle, at the Munich Security Conference on February 14.
What this means
The shift in focus for the rebranded Institute was highlighted in a statement issued by the UK government.
“[The Institute] will not focus on bias or freedom of speech, but on advancing our understanding of the most serious risks posed by the technology to build up a scientific basis of evidence which will help policymakers to keep the country safe as AI develops,” the statement said.
The risks posed by artificial intelligence (AI) included how the technology can be used to develop chemical and biological weapons, carry out cyber-attacks, and enable crimes such as fraud and child sexual abuse, the statement added.
How this affects the rest of the world
The US and UK governments were previously some of the earliest and most vocal advocates of AI safety.
Image: UK Government via Facebook
At the AI Action Summit in Paris, the UK and US declined to sign an international AI declaration calling for ethical, inclusive, transparent, and sustainable AI development.
The declaration has been endorsed by 60 nations including France, China, India and more.
Spokespersons for the respective governments cited concerns over national security, regulatory burdens, and sovereignty in AI policymaking.
The same summit saw US Vice President JD Vance calling out European nations for "excessive regulation" of the AI sector, which could diminish AI's potential to support industries. "The AI future is not going to be won by hand-wringing about safety," he added.
Earlier in January, the US government also signed an executive order to ramp up domestic AI infrastructure, one of its objectives being to harness AI for national security.
The shift in focus from "safety" to "security" in the US and UK raises the question of what will happen to the International Network of AI Safety Institutes (AISIs) established last November, which includes the US and UK, as well as to cross-border collaborations around AI safety.
For instance, last November, the Singapore and UK governments signed an agreement to strengthen AI safety efforts, covering research and development (R&D), global norm-setting, and safety testing frameworks to evaluate AI safety across the product life cycle.
Why is this happening
The rebranding follows shortly after the UK government announced its new blueprint for AI, which aims to bump up AI use in public services and set up dedicated AI growth zones across the country.
Kyle said that the Institute's new focus will help boost public confidence in AI and help leverage the technology for economic growth.
There was an emphasis on protecting citizens against the harms of AI. “… this renewed focus will ensure our citizens – and those of our allies - are protected from those who would look to use AI against our institutions, democratic values, and way of life,” he explained.
The UK government and Anthropic, the American AI firm that developed "Claude", also recently signed an agreement to explore the potential of advanced AI tools in public service delivery.
Who is involved
The rebranded Institute will partner with the Home Office and the Defence Science and Technology Laboratory (Dstl), which is the science and tech arm of the Ministry of Defence.
Other partners include the National Cyber Security Centre (NCSC) and the Laboratory for AI Security Research (LASR).
Following the rebranding, a new criminal misuse team will be formed within the Institute to conduct research on AI-related crime and security issues in the UK.
The official statement highlighted that one area of focus for the new team will be exploring methods to prevent abusers from using AI to generate child sexual abuse images and carry out crimes.
Earlier this year, the UK government made it a crime to possess, create or distribute AI tools to generate sexual content targeting children.
What others are saying
As The Register puts it, “Enjoy your biased, racist AI but don't use it to commit acts of terror or sex crimes.”
The enterprise tech news publication highlighted a series of moves by Big Tech companies and other governments indicating a shift from preventive regulation to proscriptive regulation.
These moves include Meta dissolving its Responsible AI team in 2023, Apple and Meta declining to sign the EU's AI Pact last year, and the US government under the Trump administration recently undoing Biden's AI safety orders.
Comments on the Institute's official LinkedIn page also raised concerns about the narrower scope of "security", with commentators noting that AI safety covers a broader range of AI-related concerns, including transparency, bias, data security, and responsible use.