What’s the potential of agentic AI for public sector cybersecurity?
By Elastic
Elastic’s CISO, Mandy Andress, highlighted two key aspects of enhancing security for the public sector: speed and context. Agentic AI might just be the key to delivering both and staying ahead of potential cyberthreats.
Elastic’s Chief Information Security Officer (CISO), Mandy Andress, delivered a presentation at GovWare 2025, where she spoke about the development and future of cybersecurity in the face of emerging technologies. Image: Canva.
Artificial intelligence (AI) is the topic du jour — it helps us work faster, automates extensive processes, and continues to transform businesses and roles at unprecedented speed.
For this reason, it’s essential to critically envision what cybersecurity will look like as this technology evolves, says Elastic’s Chief Information Security Officer (CISO), Mandy Andress.
Singapore’s cybersecurity landscape has seen a rise in phishing attempts, which increased by 49 per cent year on year to 6,100 cases, according to a report by the Cyber Security Agency of Singapore.
At the same time, Singapore’s public sector is contending with threats that target citizen data and critical infrastructure, like the UNC3886 attacks that were made public by Coordinating Minister for National Security K. Shanmugam in July 2025.
AI, in many cases, acts as the enabler behind these rapidly evolving threats, which has led to increasing demand for cybersecurity talent.
“We need to work and respond much, much faster than we do today… we need to work at the speed of AI, at the speed of machines,” Andress says.
“What AI brings is the ability to deeply understand the context of our environments, which people struggle with today. With that context on a single dashboard, it enables IT and security teams to make rapid decisions.”
Agentic AI can help multiply the effectiveness of limited cybersecurity talent while providing uninterrupted threat detection and response at machine speed. The public sector could unlock unprecedented advantages with this emerging technology.
The evolution of agents and security
“Agentic AI is a collection of agents that each have very specific roles to help complete a process,” Andress explains.
She gives an example where one agent manages processes while a second agent manages networking; the information gathered by these agents is then passed to a third, evaluating agent.
“I was speaking with one company that was working on vulnerability management and agentic AI, and they called their critique agent the ‘annoying corner case engineer’: the engineer that's always ‘well, what about this? What about that?’”
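As a rough sketch of that division of labour, the Python example below imagines a process agent, a network agent, and a critique agent that consolidates their findings. The class names, detection logic and sample data are illustrative assumptions for this article, not a description of Elastic’s products or the company Andress mentions.

```python
# Illustrative sketch only: narrowly scoped agents whose findings are
# consolidated by a critique agent. Names and logic are hypothetical.
from dataclasses import dataclass


@dataclass
class Finding:
    source: str        # which agent produced the observation
    detail: str        # what was observed
    severity: str = "info"


class ProcessAgent:
    """Watches process activity and flags suspicious parent-child pairs."""

    def run(self, host: dict) -> list[Finding]:
        findings = []
        for proc in host.get("processes", []):
            if proc.get("parent") == "winword.exe" and proc.get("name") == "powershell.exe":
                findings.append(Finding("process", "Office application spawning a shell", "high"))
        return findings


class NetworkAgent:
    """Watches outbound connections and flags uncommon destination ports."""

    def run(self, host: dict) -> list[Finding]:
        findings = []
        for conn in host.get("connections", []):
            if conn.get("port") not in (80, 443):
                findings.append(Finding("network", f"Outbound traffic on port {conn['port']} to {conn['dest']}", "medium"))
        return findings


class CritiqueAgent:
    """The 'annoying corner case engineer': challenges and consolidates findings."""

    def review(self, findings: list[Finding]) -> dict:
        high = [f for f in findings if f.severity == "high"]
        verdict = "escalate to analyst" if high else "log and monitor"
        return {"verdict": verdict, "evidence": [f.detail for f in findings]}


if __name__ == "__main__":
    host = {
        "processes": [{"name": "powershell.exe", "parent": "winword.exe"}],
        "connections": [{"dest": "203.0.113.7", "port": 4444}],
    }
    findings = ProcessAgent().run(host) + NetworkAgent().run(host)
    print(CritiqueAgent().review(findings))
```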
Given the level of specificity agents can have, they could transform future security operations roles, says Andress.
“Over the next couple of years, agentic AI will augment tier one analysts and help them reduce a lot of the alert fatigue, a lot of the screen switching, and allow them to work with higher analytics capabilities,” she notes.
Using agentic AI can also help to increase trust in the technology and open the door to new roles.
“[More agents] will also introduce new types of roles we'll need to focus on. What does it mean to be an Agent Manager? How do you audit the transparency of agents? How do you know that an agent is doing exactly what you think it's doing, and not going rogue?” adds Andress.
Enhancing security with AI agents
One of the biggest benefits of AI agents is that they “don’t rely on static playbooks,” says Andress.
“Agents can aggregate information on a single dashboard. What an analyst will see is an alert with all of the context that they need to make a decision, zoom in close on an issue, or follow the protocol of a specific organisation, based on their objectives within certain guardrails,” she says.
This relieves human analysts of the repetitive task of retrieving information from disparate systems, freeing up time for more critical work.
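To illustrate what that aggregation might look like in practice, the sketch below stitches context from several placeholder sources into a single alert object. The lookup functions and field names are hypothetical stand-ins for an organisation’s actual asset inventory, identity directory and threat-intelligence feeds.

```python
# Illustrative sketch: stitching context from disparate sources into one
# alert. The lookup functions are placeholders, not real integrations.
def lookup_asset(host_id: str) -> dict:
    # Placeholder for an asset-inventory query
    return {"host_id": host_id, "owner": "finance-team", "criticality": "business-critical"}


def lookup_user(username: str) -> dict:
    # Placeholder for an identity / HR directory query
    return {"username": username, "role": "accounts payable", "recent_travel": False}


def lookup_threat_intel(indicator: str) -> dict:
    # Placeholder for a threat-intelligence feed query
    return {"indicator": indicator, "reputation": "known-bad", "campaigns": ["credential theft"]}


def enrich_alert(raw_alert: dict) -> dict:
    """Return a single alert object carrying all the context an analyst needs."""
    return {
        **raw_alert,
        "asset": lookup_asset(raw_alert["host_id"]),
        "user": lookup_user(raw_alert["username"]),
        "intel": lookup_threat_intel(raw_alert["indicator"]),
    }


if __name__ == "__main__":
    alert = {"host_id": "FIN-LT-042", "username": "jtan", "indicator": "203.0.113.7"}
    print(enrich_alert(alert))
```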
This is particularly salient as Singapore’s public sector cybersecurity landscape faces a shortage of more technical roles, such as penetration testers and cybersecurity architects, which has increased the workload on an already limited cyber workforce.
The other benefit of agents lies in their proactive nature: armed with up-to-date threat intelligence, they can optimise detection rules and recommend updates, Andress adds.
“As humans and with the scale of data, we don’t have the capacity to continuously re-evaluate the threat landscape, IT and operating environments, recommend new detection rules, among other changes within a system or IT infrastructure,” she notes.
Agents can assess the environment more efficiently, analysing logs and data for threat behaviour. With this analysis, they can then recommend updates, for example to resolve false positives and produce higher-fidelity alerts.
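One simplified way to picture that feedback loop, assuming analysts label alert outcomes, is a tuning pass that flags noisy detection rules. The 30 per cent threshold and rule names below are invented for illustration.

```python
# Illustrative sketch: reviewing labelled alert outcomes per detection rule
# and recommending tuning where the false-positive rate is high. The
# threshold and rule names are arbitrary examples.
from collections import Counter

FP_THRESHOLD = 0.30  # assumed cut-off for "too noisy"


def recommend_tuning(alert_outcomes: list[dict]) -> list[str]:
    """Suggest which rules to tune based on their false-positive rate."""
    totals, false_positives = Counter(), Counter()
    for outcome in alert_outcomes:
        rule = outcome["rule"]
        totals[rule] += 1
        if not outcome["true_positive"]:
            false_positives[rule] += 1

    recommendations = []
    for rule, total in totals.items():
        fp_rate = false_positives[rule] / total
        if fp_rate > FP_THRESHOLD:
            recommendations.append(
                f"Rule '{rule}': {fp_rate:.0%} false positives over {total} alerts. "
                "Tighten conditions or add exceptions for known-good behaviour."
            )
    return recommendations


if __name__ == "__main__":
    outcomes = [
        {"rule": "office-spawns-shell", "true_positive": True},
        {"rule": "rare-outbound-port", "true_positive": False},
        {"rule": "rare-outbound-port", "true_positive": False},
        {"rule": "rare-outbound-port", "true_positive": True},
    ]
    for rec in recommend_tuning(outcomes):
        print(rec)
```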
“By bringing all of this together, we're going to be able to iterate very quickly and increase our reaction speed, which is the overall theme,” says Andress.
The housekeeping needed
While AI has the potential to change the game, “the underlying fundamentals don’t change,” notes Andress.
Maintaining a base of strong security hygiene is key to best leveraging the advancements of agentic AI for cybersecurity, she says.
“It's even more critical today, when we talk about the speed at which the threat actors are able to work, you need to have strong fundamentals to help build a defensive posture that doesn't need immediate and always-reactive adjustments,” she explains.
There is a continuous need to foster a culture of security through greater awareness and training, so that people learn to better identify threats and adopt a stronger security posture. Employees will also need to understand why extra steps in processes are necessary to ensure security, adds Andress.
This is especially true in the face of AI-driven attacks that are increasingly difficult to identify.
“Deepfakes, phishing, they’re all using AI to craft very sophisticated scams. We can’t rely on the tips and tricks that we used to train our users on, like language that doesn’t make sense, because all that is gone. Everything is highly sophisticated now,” she notes.
The role of employees and leadership
Internally, organisations must assess whether adjustments are needed to strengthen verification and validation and to ensure real-time defence capabilities, notes Andress.
“If you start to see a type of intrusion into your environment, you need that understanding, that this is a business-critical system. This is a threat actor. So I’m going to take an action, I'm going to proactively take some measures to protect [the system]”.
On their end, tech leaders must lead by example, understanding the value of the technology and how it can transform operations.
“All of the threat intelligence, machine learning, behaviour analytics, all that doesn't change. We still need the core capabilities that we are using today, and our next steps will always be continuous control and monitoring so that we stay safe,” Andress concludes.