AI: an inflection point in public sector cybersecurity

By Elastic

The question for government organisations is no longer whether to embrace AI for security, but how quickly, and how precisely, to deploy it to stay ahead of cybercriminals.

As threats move at machine speed and timelines become even shorter, the tools that enable more sophisticated threats may also be the ones needed to enhance security. Image: Canva.

Cyber attackers and defenders alike are racing to exploit artificial intelligence (AI) capabilities. AI-powered phishing and scam campaigns are increasingly personalised, targeted and cheaper to deploy.

In mid-April, Singapore’s Cyber Security Agency (CSA) issued advisory AD-2026-004. The advisory warned that a new wave of frontier AI models could reduce the time required to identify vulnerabilities and engineer exploits in software, shortening the process from months to hours.

AI-powered security threats are a global reality for the public sector.

The United States (US), for example, published its National Cyber Strategy in March 2026, reflecting a clear push for governments around the world to lean heavily into AI-driven cybersecurity amid a fast-evolving threat landscape.

As threats move at machine speed and timelines become even shorter, the tools that enable more sophisticated threats may also be the ones needed to enhance security.

AI as the defender’s new leverage

At Elastic(ON) Singapore 2026, leaders from Singapore organisations shared insights on how technology can sharpen security capabilities in the digital era.

The same properties that make AI useful to attackers, such as speed, scale and the ability to process vast volumes of information, are equally valuable on the defensive side, notes Desmond Loh, Group Director at the Centre for Strategic Infocomm Technologies (CSIT).

Loh was a judge for the 2026 edition of the Elastic Forge the Future hackathon, where participants designed agentic AI solutions for real-world impact.

Loh says that AI is helping analysts identify real threats, filter noise, and target response efforts more effectively. He points to the signal-to-noise problem in cybersecurity: security teams field thousands of alerts daily, most of which are false positives.

This is where AI can make a difference, not by replacing human judgment but by freeing up more time for informed decisions, notes Loh. With AI filtering out false positives, security teams get more time to target remediation efforts precisely.
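
To make that concrete, a minimal sketch of the filtering step Loh describes might look like the following Python. It is purely illustrative: the Alert record, the fp_likelihood score, and the 0.9 threshold are hypothetical stand-ins for whatever confidence signal a real AI-driven security platform would produce.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str           # e.g. "endpoint", "firewall", "identity"
    description: str
    fp_likelihood: float  # hypothetical model output: probability this alert is a false positive

def triage(alerts: list[Alert], fp_threshold: float = 0.9) -> list[Alert]:
    # Suppress alerts the model is confident are false positives,
    # then surface the rest, most suspicious first.
    remaining = [a for a in alerts if a.fp_likelihood < fp_threshold]
    return sorted(remaining, key=lambda a: a.fp_likelihood)

alerts = [
    Alert("endpoint", "Unsigned binary executed", 0.20),
    Alert("firewall", "Port scan from a known research scanner", 0.97),
    Alert("identity", "Impossible-travel login", 0.05),
]

for alert in triage(alerts):
    print(f"[investigate] {alert.source}: {alert.description}")

In practice, any such threshold would be tuned against an agency's own alert history, and suppressed alerts would still be logged for audit, keeping human judgment in the loop.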

Agencies around the world are embracing this approach. Those using AI-driven security platforms have reported reclaiming up to 74 per cent of a full-time security employee's hours, returning that time for more strategic work.

The goal is not automation for its own sake, but giving trained people better leverage, adds Loh.

The sentiment is shared by Elastic’s Chief Information Security Officer, Mandy Andress. She views antifragility, where systems improve and thrive because of disruption rather than merely withstanding it, as a concept pertinent to cybersecurity today.

Andress believes the increasingly high stakes – with organisations’ reputation and trust on the line, and timelines from infiltration to exfiltration getting shorter – will catalyse a shift from reactive to proactive cybersecurity practices.

AI will play a critical role in enabling defenders to respond in seconds and address the threats that matter from a sea of alerts, she notes.

Defenders must be right every time

According to Loh, an asymmetry remains at the core of cyber defence that AI does not resolve; attackers only need to be right once, but defenders need to be right every time.

What AI changes is the economics of defence, giving under-resourced public sector teams tools that were previously available only to well-funded adversaries.

This is critical for a sector that handles enormous amounts of sensitive data to run services that millions of people rely on daily, from digital tax filing and healthcare records to social support and document renewal.

The organisations that will navigate this asymmetry well are those that acknowledge both truths simultaneously.

That means treating AI literacy as a professional standard, protecting public data with an institutionalised culture of responsible AI use, and building the judgement to know when to trust their tools and when to override them.

How governments invest in the people and tools to stay ahead of smarter threats will shape their security posture, and the kind of digital services that their citizens can trust.

Building capable cybersecurity talent

Loh highlights the need to move from AI-aware to AI-capable.

This is because over-reliance on tools without understanding them creates its own set of vulnerabilities.

What is needed is a comprehensive understanding of AI’s limits, its failure modes, how it works, and why it works, says Loh.

The US cyber strategy's sixth pillar on Building Talent and Capacity prioritises education and training in cyber technologies, reflecting a shared recognition across governments that the human layer is just as important as the technical one.

Loh notes that Singapore is making meaningful progress on that front, with digital literacy programmes, upskilling opportunities, and hackathons that help develop cybersecurity talent and build working knowledge.

Watch the interview here.