Agentic AI, quantum represent extraordinary moment in tech, says Singapore’s Digital Minister
By Amit Roy Choudhury
Speaking on the second day of SICW 2025, Minister Josephine Teo shared steps being taken by Singapore to foster private sector and international cooperation to develop protocols to govern these technologies.

Talking about agentic AI and quantum computing, Singapore’s Minister for Digital Development and Information, Josephine Teo, said it will take collective will, wisdom and action to govern these technologies before they govern us. Image: MDDI.
Welcoming delegates on the second day of Singapore International Cyber Week 2025 (SICW 2025), Singapore’s Minister for Digital Development and Information, Josephine Teo, said the world was passing through “an extraordinary moment in technology” with agentic artificial intelligence (AI) and quantum computing “reshaping the world right before our eyes”.
Minister Teo, who is also Singapore’s Minister-in-charge of Smart Nation and Cybersecurity, added that both technologies offer tremendous promise, but they also pose serious risks.
As a result, these technologies demand a shift from reactive regulation to proactive preparation “when their implications cannot be fully predicted”, she said.
Minister Teo noted that “it will take collective will, wisdom and action to govern these technologies before they govern us”.
On agentic AI, the Minister announced that the Cyber Security Agency of Singapore (CSA) will release, for public consultation, a document on securing agentic AI, part of the government’s effort to proactively address the challenges internationally.
This document will be an addendum to CSA’s Guidelines and Companion Guide on Securing AI Systems, covering the unique risks of agentic AI systems.
“It is also an invitation – to governments, researchers, and industry partners – to help shape a global reference for securing agentic AI,” Minister Teo said.
Quantum computing
For quantum computing, CSA has launched two resources for public consultation.
The first was the Quantum Readiness Index, a self-assessment tool that helped organisations understand their current preparedness for quantum threats to encryption and chart their migration journey towards quantum-safe systems.
The second was the Quantum-Safe Handbook, which guided organisations, particularly Critical Information Infrastructure (CII) owners and government agencies, to prepare themselves for the transition to quantum-safe cryptography.
This handbook was jointly developed by CSA, GovTech Singapore, and the Infocomm Media Development Authority of Singapore (IMDA), in collaboration with leading technology companies, cybersecurity consultancies, and professional associations.
“We consider these resources to be living MVP (minimum viable product) documents that get improved through public feedback. And we welcome you to contribute so we can all learn together,” the Minister said.
She also announced that CSA would be signing memoranda of cooperation with major technology companies, including Google, AWS, and TRM Labs, to enhance AI-driven intelligence sharing on cyber threats and enable joint operations against malicious activities.
The Minister noted that Singapore’s partnership with Google “demonstrates the tangible benefits” of partnering with the private sector.
She highlighted the Enhanced Fraud Protection feature within Google Play Protect, which has blocked 2.78 million malicious app installations across 622,000 devices in Singapore as of September 2025.
Safe and responsible agentic AI
After her speech, speakers at a high-powered panel, Infinite Actions, Finite Control – Securing Agentic AI, noted that the safe and responsible adoption of agentic AI required a multifaceted approach.
There was a need to build strong governance, an ongoing modular validation process, and collaborative development of standards and protocols.
By encouraging shared learning and implementation of sector-specific safeguards, both organisations and regulators could incrementally build confidence in the technology, the speakers noted.
Moderating the discussion, CSA’s Director (Strategy & Planning), Gayle Goh, highlighted the importance of building trust and confidence in agentic AI through collaboration.
The successful adoption of the technology also hinged on organisations fostering open communication among all stakeholders, said Goh, highlighting the importance of continuous education to ensure that society could effectively navigate the opportunities and risks posed by agentic AI.
Robust governance a must
Echoing the points raised by Goh, the UK’s Department for Science, Innovation and Technology’s Deputy Director for Cyber Security Innovation and Skills, Andrew Elliot, emphasised the critical role of robust governance, clear lines of responsibility, and adherence to established security best practices in tackling the challenges posed by agentic AI.
He advocated for the development and adoption of standardised codes and protocols, rooted in evidence and forged through international collaboration, to ensure organisations could effectively manage accountability and incident response.
Also emphasising standardisation, Google’s VP Security Engineering, Royal Hansen, supported guardrails as enablers for safe and incremental adoption of agentic AI across diverse sectors.
These guardrails could include human oversight and a log of activities, Hansen said, advocating the need for sector-specific protocols and close cooperation between technologists and domain experts.
Hansen also raised the prospect of using agentic AI to bolster cyber defences with systems that could autonomously detect and respond to threats.
Bringing a different perspective to the discussion, AI assurance provider Resaro’s Co-CEO, April Chin, explained that organisations struggle to move agentic AI from proofs of concept (POCs) to full production due to key bottlenecks at the validation and trust-building stage.
Likening agentic AI to an energetic but unpredictable intern, MDDI’s Chief AI Officer and Deputy Government Chief Digital Technology Officer, He Ruimin, underscored the need for organisations to carefully calibrate permissions, provide appropriate oversight, and integrate agentic systems into comprehensive governance structures.
He Ruimin noted that as more traditional software becomes agentic, clear distinctions are fading, necessitating broad risk evaluation and adaptive frameworks that focus on permissions and system-level interactions rather than rigid categories.
Using agentic AI for cybersecurity
During another panel discussion, Survival of the Smartest – Agentic AI in Cybersecurity Operations, speakers elaborated on the use of agentic AI to bolster cyber defences.
Moderating the panel, Booz Allen Hamilton’s Senior Executive Advisor, Stephen Fogarty, noted that agentic AI had a “transformative potential” in cybersecurity amid an increasingly complex and threatening digital environment, where traditional approaches were insufficient to address new challenges.
He stressed that organisations need to rapidly but safely harness agentic AI, balancing innovative adoption with the need for risk management, particularly by maintaining human involvement in critical decisions.
The panel agreed that while agentic AI represented a pivotal advance in cybersecurity, it needed to be adopted with caution, responsibility and strategic foresight.
The speakers concluded that a thoughtful, adaptive approach was needed so that organisations could harness the full potential of agentic AI while minimising risks in a complex digital landscape.