Singapore encourages the use of AI to improve cybersecurity posture
By Si Ying Thian
The Cyber Security Agency of Singapore (CSA) will soon announce more details about a public consultation exercise around technical guidelines to secure AI systems and has invited the industry to participate.
Senior Minister of State for Communications and Information of Singapore, Dr Janil Puthucheary, pointed to the need to balance two priorities, innovation and risk, when it comes to AI for cybersecurity and vice versa. Image: Association of Information Security Professionals (AiSP).
Artificial intelligence (AI) is increasingly embedded within cybersecurity solutions to keep up with evolving threats, said Cyber Security Agency (CSA)’s Assistant Chief Executive (National Cyber Resilience), Dan Yock Hau.
Speaking at the AI Security Summit organised by the Association of Information Security Professionals (AiSP) on July 3, Dan said that Singapore needs to quickly incorporate AI capabilities into its cyber defence systems.
CSA will release more details of a public consultation exercise in the coming weeks to seek industry feedback on technical guidelines that provide practical advice and resources for organisations seeking to implement security controls for AI systems, said Dan.
This builds on the Singapore government’s existing efforts, including its recently enhanced generative AI (GenAI) governance framework, capacity-building to improve the cyber readiness of small and medium-sized enterprises (SMEs), as well as certifications and standards to assess an organisation’s cyber hygiene.
Knowledge platform for best practices launched
The summit saw AiSP launching an AI special interest group, which will provide a platform for members to exchange knowledge on developments in AI for cybersecurity, as well as cybersecurity for AI.
Dan underlined the importance of a coordinated effort by government, academia and industry to manage the complex risks posed by AI. He added that this requires a deeper understanding, and frameworks and guidelines for the industry.
Senior Minister of State for Communications and Information of Singapore, Dr Janil Puthucheary, said at the summit that within the government, GovTech Singapore has been developing capabilities to simulate and manage AI-enabled attacks on the government’s products and platforms.
A better understanding of these attacks allows us to put in the right safeguards, he added.
Several cybersecurity specialists, including Amaris AI’s Chief Innovation and Trust Officer, Professor Yu Chien Siang, also pointed in their presentations to the growing trend of red-teaming exercises among both private and public organisations.
Red teams comprise cybersecurity experts who aggressively attack an organisation’s cyber defences to probe for potential weaknesses, which can then be addressed.
The Singapore public sector tends to have its internal red teamers perform such exercises rather than outsourcing the task to third-party vendors, to avoid exposure risk, said softScheck’s Founder and CEO, Henry Tan, in conversation with GovInsider.
A commentary on cybersecurity news site CSO Online notes that continuous red teaming may be the only effective defence against AI risks.
Singapore to balance two priorities
Just as threat actors can integrate AI into their operations, defenders need to learn to master the benefits it can bring, said Dr Janil.
The two Singapore government leaders highlighted the benefits of AI, before delving into the potential risks.
“We will always need to be striking a careful balance between these two big priorities to ensure that we innovate within the [AI] sector,” he added.
He also emphasised the industry's role to spearhead the use of AI in cybersecurity tools, which would enable Singapore to gain a “decisive advantage.”
Instead of watching in despair as bad guys use AI, we need to level the playing field and harness the power of AI to fight these threats, said CSA’s Dan.
AI both a friend and foe to cybersecurity
On both detection and response fronts, Dan pointed out that AI excels in its ability to analyse large amounts of information and automate the defence processes.
Organisations can use AI to produce the first level of information for higher-level analysis and further investigation by cybersecurity professionals, highlighted Dan.
Organisations can also train GenAI to recommend next steps for intervention, highlighted Fortinet’s Jonas Walker in his presentation.
Additionally, GenAI’s natural language capabilities can simplify cybersecurity technicalities and provide actionable insights to employees who may not be experts themselves, said Cisco’s Director, Cybersecurity Sales, ASEAN, Koo Juan Huat, to GovInsider.
On the flipside, the speakers also highlighted that social engineering enabled by AI is posing new cybersecurity threats.
AI-enabled social engineering refers to the use of AI to deceive targets with human-like interactions, deepfake videos and voice clones.
CSA’s analysis of a small sample size of phishing emails reported to the agency showed that about 13 per cent of them were AI-generated, said Dan.
The frequency of such attacks undermines public trust and confidence in AI, and this in turn affects whether the industry can use AI to drive further growth in the digital economy, said Dr Janil.
Correction: The story has been updated to reflect that 13 per cent of the phishing emails monitored by CSA were AI-generated and not 30 per cent as was published in an earlier version of this story.