Cyber resilience a leadership and societal challenge

By Amit Roy Choudhury

Speakers at the Festival of Innovation shared how raising awareness of cyber risks alone was not sufficient, and that there was a need to build a “muscle memory” for cyber hygiene across society.

Speakers at the panel session, Who Holds Responsibility in a Connected Era, were unanimous that cyber resilience was no longer just an IT issue but a leadership, organisational, and societal challenge. Image: GovInsider.

While increased digitalisation of government systems has improved public sector service delivery, it has also exponentially increased risk. 

 

In September last year, the Cyber Security Agency of Singapore (CSA) said phishing, ransomware, and infected hardware were the nation's major cyber threats. 

 

During the recent Festival of Innovation (FOI) 2026 organised by GovInsider, there was considerable interest among the attendees in presentations and panels that focused on cybersecurity. 

 

At one such panel discussion, Who Holds Responsibility in a Connected Era, speakers were unanimous that cyber resilience was no longer just an IT issue but a leadership, organisational, and societal challenge.  

 

Access Partnership’s Executive Vice Chair, Gregory Francis, who moderated the session, framed cyber resilience as “the hardest thing to manage from a government post” because “governments must plan for the worst while everyone else budgets for sunny days”.

 

The panellists agreed that cyber responsibility had shifted from IT managers to CEOs and boards, and that simply raising awareness was not enough: despite massive public education campaigns, people still fell for scams under pressure.

 

Former military officer and communications leader, Justin Fong, noted: “It was an illusion that awareness [of cyber risks] was protection or immunity”. 

 

He added that everybody knows about cybersecurity risk, and “yet people still fall for it”. 

Shared responsibility 

 

MyTCoE Principal Assistant Director at Malaysia’s Jabatan Digital Negara (JDN), Sri Lakshmi a/p Kanniah, said protecting networks from cyberattacks must be a shared responsibility.

 

“We cannot just put the responsibility on the regulators; there is a need to educate the citizens, and also the service owners must be aware of the responsibility that they carry,” Lakshmi said.

 

Singapore IT industry body SGTech’s Chair for Cybersecurity, Genie Sugene Gan, cautioned that when responsibility was shared by everyone, the risk was that no one truly owned it.

 

“You end up with nobody actually taking the responsibility in the end,” she said.

 

The best way forward, according to Gan, was to design cyber resilience into organisations through “muscle memory building simulations”, security-linked key performance indicators (KPIs), and procurement and human resource policies.

 

“The idea was to build up cyber hygiene so that it becomes as instinctive as locking your front door,” she said. 

Top-down approach crucial 

 

Infocomm Media Development Authority (IMDA) Director, Angela Wu, said top-down support and resourcing were critical to building cyber resilience.

 

She noted that while frontline teams usually showed strong commitment to cybersecurity, their efforts were often constrained by insufficient financial and organisational backing from senior leadership.

 

“Recent incidents show that organisations need sustained investment in skills, tools, and processes to ensure both security and operational sustainability,” she said.  

 

In Wu’s view, cybersecurity cannot be treated as a side issue; government ministers, boards, and CEOs must actively prioritise and fund it if they expect their teams to build and maintain resilient systems.

 

Wu stressed that early reporting of “small” suspicious activities was crucial, and that staff should be encouraged and not punished for raising concerns.  

 

Cross-sector information sharing via Information Sharing and Analysis Centres (ISACs) and other channels helped organisations learn from one another, she said. 

 

JDN’s Lakshmi emphasised the importance of communicating with society at large when the government develops regulations and policies. 

 

“They [society] don't know where to go, because there are so many channels, and so we must communicate what the regulations are, what accountability they hold, and what responsibility they have in terms of combating these cybersecurity incidents,” she said.

Leadership and ecosystem responsibility 

 

CyberSG TIG Collaboration Centre Executive Director, Willis Lim, shared that cyber resilience had evolved from being seen as a narrow IT issue to an ecosystem responsibility.

 

Lim noted that over the past eight years, meetings on cybersecurity that once drew only IT managers now attract CEOs and chief information officers (CIOs).  

 

Speaking from his vantage point working with startups, Lim observed that startups were not inherently careless about security but were often focused on survival and speed: “the real gap was that they encounter regulations too late and are forced into costly retrofitting to meet sectoral requirements”.

 

Looking ahead, Lim warned that as companies rapidly integrate artificial intelligence (AI) and agents into workflows, it would become “very scary” when software starts making decisions on behalf of organisations. 

 

He called for deliberate mapping of where AI sits in business processes and clear accountability when AI-driven decisions go wrong. 

How to control rogue AIs 

 

Controlling AI was the theme of the presentation, How the evolution of AI is redefining cyber risk and trust, by CSA’s Director, Safer Cyberspace Division, Veronica Tan. 

 
Veronica Tan, CSA’s Director, Safer Cyberspace Division, shared that AI safety should be treated as “layered security”, building on existing cyber controls rather than replacing them. Image: GovInsider.

Tan grouped AI into three distinct categories according to their abilities.

 

The basic category was what she called narrow AI, that is, a programme built to do one specific job very well, like recommending movies or detecting fraud, but one that cannot work outside that narrow task. 

 

The next was generative AI (GenAI): programmes that could create new content, such as text, images, or audio, based on patterns they had learned from large amounts of data.

 

The third was agentic AI: a programme that didn’t just respond to prompts, but could make plans, take actions on its own, and use tools or systems to achieve goals within set boundaries.

 

Talking mainly about agentic AI, Tan noted that research and real-world tests had shown that such systems could take harmful actions such as blackmail, ignore instructions, or engage in “sandbagging”, appearing safe in evaluation but pursuing hidden goals once deployed.

 

“Combined with familiar challenges like bias, hallucinations, opaque decision-making, and IP or data misuse, these behaviours create serious implications for national security, regulatory compliance, and public trust,” she said. 

AI security needs to be layered 

 

Tan noted that AI safety should be treated as “layered security”, building on existing cyber controls rather than replacing them.  

 

She said policymakers could frame requirements in tiers: start with strong cybersecurity and data governance as a baseline and then add AI governance for narrow AI (covering explainability, fairness, and data minimisation). 

 

The next step would be to introduce GenAI safeguards for content risks (provenance, transparency that users are interacting with AI, and IP protections). 

 

Finally, there was a need for agentic AI controls such as agent identity management, context-specific permissions, and mandatory kill switches.  

 

“Human oversight should be explicitly defined at each tier, from human approval for high-impact decisions to clear boundary-setting and emergency shutdown procedures for autonomous agents,” Tan said.
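
For readers who want a concrete picture, the sketch below shows one way the agentic AI controls Tan named, agent identity, context-specific permissions, and a kill switch, might look in code. It is a minimal, hypothetical Python illustration; the names (AgentGuard, allowed_actions, kill) are assumptions for this sketch and are not drawn from CSA guidance or any specific framework.

# A minimal, hypothetical sketch of the controls named above: agent identity
# management, context-specific permissions, and a mandatory kill switch.
# All names here are illustrative assumptions, not CSA guidance.
from dataclasses import dataclass, field

@dataclass
class AgentGuard:
    agent_id: str                                        # agent identity management
    allowed_actions: set = field(default_factory=set)    # context-specific permissions
    killed: bool = False                                 # kill-switch state

    def kill(self) -> None:
        """Emergency shutdown: blocks every further action by this agent."""
        self.killed = True

    def execute(self, action: str, detail: str) -> str:
        """Run an action only if the agent is alive and the action is permitted."""
        if self.killed:
            raise PermissionError(f"{self.agent_id}: kill switch engaged")
        if action not in self.allowed_actions:
            raise PermissionError(f"{self.agent_id}: action '{action}' not permitted")
        # A real deployment would invoke the underlying model or tool here.
        return f"{self.agent_id} performed {action}: {detail}"

# Usage: this agent may draft emails but not, say, send payments.
guard = AgentGuard("invoice-bot", allowed_actions={"draft_email"})
print(guard.execute("draft_email", "payment reminder"))  # permitted
guard.kill()                                             # emergency shutdown
# Any further guard.execute(...) call now raises PermissionError.

One deliberate choice in the sketch is that permissions form an allow-list: the agent can do nothing unless an action is explicitly granted, which mirrors Tan’s point that boundaries and shutdown procedures should be defined up front rather than bolted on later.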