Singapore and UK governments partner on AI safety efforts
By Si Ying Thian
The extended collaboration between the governments will include AI safety research, global norms and standards setting, as well as safety testing frameworks across the AI life cycle.
Singapore's Minister for Digital Development and Information, Josephine Teo, and the UK's Secretary of State for Science, Innovation and Technology, Peter Kyle, signed an agreement to advance AI safety efforts. Image: Ministry of Digital Development and Information (MDDI)
The Singapore public service has seen more than 65,000 officers using a secure version of a generative artificial intelligence (GenAI) tool in their daily tasks, as well as several thousand purpose-built AI bots, according to Singapore's Minister for Digital Development and Information, Josephine Teo.
She emphasised that policymakers need a good grasp of how AI is used and must be adept at setting guardrails for its use.
“We made it clear to government officials that they are welcome to experiment, provided they take ownership and accountability of the results,” she explained.
Minister Teo was speaking at the Future of AI Summit organised by the Financial Times held in London, UK, on November 6.
Her working visit saw the signing of a Memorandum of Cooperation (MoC) with the UK’s Secretary of State for Science, Innovation and Technology, Peter Kyle, to advance AI safety collaborations between Singapore and the UK.
This agreement will build on existing collaborations in AI safety between both governments. During the last AI Safety Summit in November 2023, both countries committed to aligning with the international network of AI Safety Institutes (AISIs) on their work on research, standards and testing.
The AISI network currently comprises 10 countries and the European Union, with the goal of advancing the science of AI safety.
Minister Teo and Secretary Kyle shared that the collaboration reinforces their support for AISIs.
Singapore and the UK also previously signed two agreements to advance the use of data and new technologies like AI by their governments, through research and regulatory cooperation.
In contrast, the new agreement more explicitly prioritises collaboration on AI safety matters.
AI safety, rather than use, will be the focus
The new agreement identified four key areas of collaboration.
These include research to develop safer AI systems and risk management; global norms setting on international AI safety standards and protocols; knowledge exchange between the AI safety institutes on trustworthy and safe AI systems for global use; as well as safety testing frameworks that provide robust evaluations throughout the AI product lifecycle.
The agreement will focus on the AI safety work between the regulating agencies, namely Singapore's Infocomm Media Development Authority (IMDA) and the UK's Department for Science, Innovation and Technology (DSIT), as well as the designated AI safety institutes in Singapore and the UK.
The designated AI safety institutes are the Digital Trust Centre (DTC) at Nanyang Technological University and the UK AI Safety Institute.
AI safety institutes, which are usually publicly funded and state-backed, offer governments technical expertise and resources to understand and mitigate the risks of AI.
Unlike AI ethics, which focuses more on the responsible design and implementation of AI systems, AI safety refers to a scientific field of research focused on evaluating, preventing, and mitigating risks from advanced AI systems, according to the Bletchley Declaration.
The International Center for Future Generations (ICFG) recently covered the progress of the first wave of AI safety institutes set up around the world.
Singapore’s regional approach: AI safety beyond US and China
“Global discussions take a lot longer, so we have to move when we are able to, and hopefully what we do can serve as a useful reference point,” said Minister Teo.
On where Singapore stands on AI safety between the tech giants of the US and China, she shared that while there are discussions with both, Singapore also has the opportunity to engage with "a large community of interested stakeholders."
She cited Singapore's involvement in the AI Playbook for Small States, which highlights best practices from small states on how they have implemented AI strategies and policies.
The playbook was developed by IMDA and Rwanda's Ministry of ICT and Innovation, in consultation with members of the Digital Forum of Small States (Digital FOSS).
She highlighted the synergies shared by this group of small states, including Singapore: "I should say that the digital ministers understand each other's priorities, and also have commitment building on what we have been able to do together in cyber security to advance on AI development."
In Southeast Asia, the Association of Southeast Asian Nations (ASEAN) has a digital economy agreement in place, along with a framework that is "in quite advanced discussion."
She added that ASEAN has also established a data management framework, and is working towards strengthening cooperation around AI.
“At the ASEAN level, [we] have an agreement on what kinds of AI deployment should be encouraged in order to meet ethical standards,” she shared.