How AI is reshaping cybersecurity for Singapore’s public sector

By Fortinet

In an age of digital innovation and adoption, cybersecurity must evolve to become just as data-driven and adaptive as the threats themselves, notes Fortinet’s Country Head for Singapore and Brunei, Jess Ng.

Ng notes that Fortinet steps in to help the public sector embed security into the fabric of the digital ecosystem, by using a unified AI-driven platform that provides consistent protection and visibility. Image: Canva.

For a digitally mature country like Singapore, a strong cybersecurity framework is essential.

 

The country has embraced digital transformation to the point where most interactions between government and citizens are digital in nature.

 

Any disruption of the digital infrastructure can directly affect citizens.

 

“This [digital] maturity changes the equation in a few ways. The attack surface is now very broad, spanning cloud platforms, mobile applications, data-sharing platforms, critical operational technology (OT) systems, and a growing number of services enabled by artificial intelligence (AI),” explains Fortinet’s Country Head for Singapore and Brunei, Jess Ng.

 

The success of Singapore’s vision of harnessing digital technology to transform society depends on ensuring that government services are secure, reliable, and resilient against rising cyber threats.

 

Alongside the need to ensure uptime and seamless services while strengthening cybersecurity, public sector technology leaders also face pressure on budgets and productivity, adds Ng.

 

She notes that this is where Fortinet steps in to help the public sector embed security into the fabric of the digital ecosystem, by using a unified AI-driven platform that provides consistent protection and visibility.

 

“The goal is to make sure that as Singapore pushes forward with Smart Nation 2.0 and AI adoption, security and trust advance in lockstep with that progress,” says Ng.

Changes in parallel

 

AI is a powerful force reshaping the strategies of both cybercriminals and frontline cybersecurity teams.

 

Phishing and social engineering attempts crafted by AI are becoming more sophisticated, more convincing, more targeted, and harder to detect.

 
Fortinet’s Country Head for Singapore & Brunei, Jess Ng. Image: Fortinet.

Ng warns that this trend is even more pronounced across Asia Pacific, with attackers using AI-generated deepfakes, automated reconnaissance, and adaptive malware to accelerate every stage of an attack.

 

“What has fundamentally changed is the speed, scale, and autonomy of attacks. Tasks that once took days or weeks, such as mapping networks, profiling users, or crafting tailored lures, can now be completed in minutes.

 

“Attack campaigns can adjust in near real time based on how defenders respond, such that cybercrime operations are beginning to approach the sophistication of nation-state campaigns,” says Ng.

 

This level of sophistication means that public sector agencies can no longer rely on traditional, perimeter-centric defences and manual processes.

 

“AI can help cybersecurity teams process massive volumes of data at speed, correlate signals across millions of events and identify subtle behavioural anomalies that would be impossible for humans to spot manually,” shares Ng.

 

She identifies three key benefits for public sector security teams:

  1. AI reduces noise by automatically triaging alerts and filtering out false positives, allowing analysts to focus on the incidents that truly matter.
  2. AI enables faster, more consistent response through orchestration and automation, containing threats, blocking malicious activity, or enforcing controls in near real time.
  3. AI acts as a force multiplier for talent, supporting over-stretched teams by handling repetitive tasks and accelerating investigations.

“The goal is not to replace human expertise, but to elevate it, enabling security teams to move from reactive firefighting to proactive defence and resilience building,” notes Ng.
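The triage and anomaly-detection roles described above can be illustrated with a minimal sketch. Everything here is hypothetical for illustration, not Fortinet's product or API: alerts carry a model-assigned risk score, low-scoring alerts are auto-closed as noise, and per-host event counts are z-scored against a fleet baseline to surface behavioural outliers.

```python
from statistics import mean, stdev

def triage(alerts, threshold=0.8):
    """Split alerts into analyst-worthy incidents and auto-closed noise.

    The 0.8 threshold is an illustrative value, not a recommended setting.
    """
    escalated = [a for a in alerts if a["risk"] >= threshold]
    auto_closed = [a for a in alerts if a["risk"] < threshold]
    return escalated, auto_closed

def anomaly_scores(event_counts):
    """Z-score each host's event count against the fleet-wide baseline."""
    values = list(event_counts.values())
    mu, sigma = mean(values), stdev(values)
    return {host: (n - mu) / sigma for host, n in event_counts.items()}

# Hypothetical model-scored alerts: only high-risk ones reach an analyst.
alerts = [
    {"id": 1, "risk": 0.95},  # likely real incident
    {"id": 2, "risk": 0.10},  # probable false positive
    {"id": 3, "risk": 0.85},
]
escalated, noise = triage(alerts)

# Hypothetical per-host event counts: web-03 stands out from the baseline.
scores = anomaly_scores({"web-01": 10, "web-02": 12, "web-03": 100})
```

In practice the scoring model and baselines would come from the security platform itself; the point of the sketch is only the shape of the workflow, filter first, then focus human attention on the outliers.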

Getting rid of blind spots

 

Ng notes that as organisations digitalise their internal environments, each new initiative typically introduces new tools across networking, cloud, identity, and applications.

 

But this may result in fragmented visibility and inconsistent policy enforcement, she cautions.

 

“Traditional, tool-by-tool approaches create blind spots. Teams may have deep insight within individual systems, but very limited understanding of how threats move across the broader environment,” she explains.

 

A platform-driven, AI-powered approach unifies visibility, intelligence, and response across the digital estate, improving response times and the detection of zero-day threats without having to scale headcount, Ng notes.

 

“Our approach is built on a simple principle: reducing complexity is essential to improving security.

 

“Our unified platform brings together secure networking and security operations across IT, cloud, endpoint, and operational technology, supported by shared intelligence and AI-driven analytics,” she explains.

 

The platform operates on a common data and policy foundation so that agencies obtain end-to-end visibility and can respond to threats across environments.

 

Ng shares that these automated capabilities help enforce protections quickly when suspicious activity is identified, which is critical for countering fast-moving, AI-enabled attacks.
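The automated-enforcement idea can be sketched in a few lines. This is an illustrative containment loop under assumed names (`handle_detection`, `BLOCK_THRESHOLD`, the sample IPs), not Fortinet's actual API: when a detection crosses a confidence threshold, the source IP is validated and added to a blocklist that a firewall rule generator could consume.

```python
import ipaddress

BLOCK_THRESHOLD = 0.9  # illustrative cut-off for automatic containment
blocklist: set[str] = set()

def handle_detection(event):
    """Contain high-confidence detections in near real time; log the rest."""
    if event["confidence"] >= BLOCK_THRESHOLD:
        # ip_address() raises ValueError on malformed input, so only
        # well-formed addresses ever reach the blocklist.
        ip = str(ipaddress.ip_address(event["src_ip"]))
        blocklist.add(ip)
        return "blocked"
    return "logged"

# Two hypothetical detections from an AI engine (documentation-range IPs).
events = [
    {"src_ip": "203.0.113.7", "confidence": 0.97},   # auto-blocked
    {"src_ip": "198.51.100.4", "confidence": 0.40},  # logged only
]
actions = [handle_detection(e) for e in events]
```

A real deployment would push the blocklist to enforcement points and record every automated action for audit, but the core pattern, detect, decide against a threshold, enforce, is the same.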

Sustaining digital leadership

 

“Singapore currently has a strong foundation in AI governance and cybersecurity policy. The next phase is about operationalising these frameworks as AI becomes embedded across public services and infrastructure,” notes Ng.

 

This may include translating high-level AI principles into practical security requirements, such as protecting training data, monitoring models for misuse or tampering, and ensuring accountability in AI-driven decisions.
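One of those requirements, monitoring models for tampering, can be sketched as a simple integrity check. The names and the byte strings below are hypothetical stand-ins, not a description of any agency's or vendor's actual controls: pin a SHA-256 digest of the model artifact at deployment, then verify it before every load.

```python
import hashlib

def digest(artifact: bytes) -> str:
    """Return the SHA-256 hex digest of a model artifact."""
    return hashlib.sha256(artifact).hexdigest()

def verify_model(artifact: bytes, pinned_digest: str) -> bool:
    """True only if the artifact matches the digest pinned at deployment."""
    return digest(artifact) == pinned_digest

model_bytes = b"model-weights-v1"   # stand-in for the real weights file
pinned = digest(model_bytes)        # recorded in a trusted store at deploy time
ok = verify_model(model_bytes, pinned)                    # untouched artifact
tampered = verify_model(b"model-weights-v1x", pinned)     # modified artifact
```

Hash pinning only detects modification of the stored artifact; monitoring for misuse of a live model requires separate behavioural controls.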

 

Public-private collaboration is another area that can help to strengthen cybersecurity resilience, notes Ng.

 

“Threats evolve faster than regulation, and ongoing information sharing, joint exercises, and shared playbooks are essential to staying ahead of AI-enabled risks,” she says.