Singapore launches Centre for Advanced Technologies in Online Safety (CATOS)

By Si Ying Thian

Government R&D agency A*STAR has launched the Centre for Advanced Technologies in Online Safety (CATOS) to advance technological efforts to build a safer online ecosystem in Singapore.

Minister for Communications and Information Josephine Teo touring the exhibition hall to view tech solutions developed by A*STAR researchers. Image: CATOS.

The Singapore government is investing S$50 million (US$37 million) to fund the new Centre for Advanced Technologies in Online Safety (CATOS) over five years to develop tech solutions that will tackle online harms.

CATOS' centre director, Dr Yang Yinping, pointed to three key pillars guiding the centre's work: deep tech, system engineering, and seamless user applications. Image: CATOS.

CATOS will build on existing regulatory and digital literacy efforts as “another useful tool in [the government]’s arsenal in the battle against online harms,” said Minister for Communications and Information Josephine Teo at the 15 May launch event.  

The centre is hosted by A*STAR and supported by the Ministry of Communications and Information (MCI) and the National Research Foundation.

Current efforts range from legislation such as the Personal Data Protection Act and the Cybersecurity Act to IMDA’s Digital Skills for Life framework, which equips and empowers individuals with skills to navigate cyberspace.

Speaking at the launch, CATOS’ centre director, Dr Yang Yinping, pointed to three key pillars that would guide CATOS’ work: deep tech research to keep pace with fast-moving threats, system engineering to translate research into real-world solutions, and seamless user applications across the public and private sectors.

Fighting AI-generated harms with AI-powered tools

Last December, a deepfake video showing Singapore’s former Prime Minister Lee Hsien Loong promoting an investment platform purportedly designed by Elon Musk circulated on Facebook.

AI has amplified online harms in both speed and scale, but it can also power the tools to fight them. While AI helps bad actors generate deepfakes, misinformation and disinformation, other AI tools can detect such content and support preventive measures.

At the event, the plenary sessions highlighted growing concerns about AI-generated harms, while the exhibition hall was dominated by tools that use the same technology to fight them.

Stanford University's Cyber Policy Centre Co-Director, Prof Nate Persily, highlighted that AI models are harder to regulate than social media platforms. Image: CATOS.

In his keynote talk, the Co-Director of Stanford University’s Cyber Policy Centre, Prof Nate Persily, noted that it is more difficult to regulate AI than social media platforms.

“When it comes to social media, we’ve got three big [social media] companies that can be regulated. When you look at AI models, there are 60,000 versions of these open models that are available.

“As much as we want to logically think about [regulating] this new type of environment like the social media environment, I think it’s quite challenging.”

To keep up with the rise of AI-powered threats, the Stanford Cyber Policy Centre recently rolled out AI tools to help smaller industry players moderate content better, as they may not have robust trust and safety teams.

Translating research into real-world solutions

According to its official press release, CATOS will focus on talent development, open calls for research proposals, community engagement, and establishing a sandbox environment to test and refine new technologies.

CATOS has been a year in the making, with its leaders collaborating with stakeholders across government, industry, academia, and civil society to develop and deploy advanced tech to address online harms.

For example, the Public Transport Council, a statutory board under Singapore’s Ministry of Transport, has been partnering with A*STAR to use a social monitoring platform that tracks intense emotions and potential falsehoods on the internet, especially content likely to go viral.
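
As a purely illustrative sketch, the snippet below shows the kind of scoring such a monitoring platform might apply, pairing emotionally charged language with early signs of virality. The keyword list, weights and threshold are invented for this example and do not reflect the actual platform.

```python
# Hypothetical sketch of monitoring logic: flag posts that combine emotive
# language with fast early spread. All terms, weights and thresholds here are
# illustrative assumptions, not the actual A*STAR platform.
import string

EMOTIVE_TERMS = {"outrage", "shocking", "cover-up", "scandal", "banned"}


def monitoring_score(text: str, shares_per_hour: float) -> float:
    """Combine an emotive-language count with early share velocity into one score."""
    words = {w.strip(string.punctuation) for w in text.lower().split()}
    emotive_hits = len(words & EMOTIVE_TERMS)
    virality = min(shares_per_hour / 100.0, 5.0)  # cap the virality contribution
    return emotive_hits + virality


if __name__ == "__main__":
    post = "Shocking cover-up by the operator, share this before it gets banned!"
    score = monitoring_score(post, shares_per_hour=450)
    action = "escalate for review" if score > 3 else "keep watching"
    print(f"score={score:.1f} -> {action}")
```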

How CheckMate works. Image: CheckMate.

The partnership predates the setup of CATOS.

Speaking from a civil society perspective in a panel discussion, Cyber Youth Singapore’s Ben Chua underlined the need for consolidated efforts.

“There are a lot of efforts going on... but there’s an issue we have that the efforts are not consolidated so they dissipate as quickly as they start,” he explained.

Beyond consolidating the tech solutions developed by the Online Trust and Safety team at A*STAR, CATOS will work with community partners like CheckMate, a volunteer-driven initiative aimed at countering online misinformation and scams.

The partnership aims to build these partners’ capabilities with CATOS’ in-house fact-checking technologies.

Users can forward dubious messages to a WhatsApp number and receive fact-checks in response.
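
As an illustration of that flow, the sketch below shows what a service handling such forwarded messages might do on the server side; the function names, claim store and verdict labels are hypothetical and are not CheckMate’s actual implementation.

```python
# Hypothetical sketch of a forward-to-check flow: match an incoming message
# against claims already reviewed, otherwise queue it for human volunteers.
# Names and data below are illustrative, not CheckMate's actual code or API.
from dataclasses import dataclass


@dataclass
class Verdict:
    label: str        # e.g. "likely scam", "misleading", "pending review"
    explanation: str


# Hypothetical store of claims volunteers have already reviewed.
KNOWN_CLAIMS = {
    "guaranteed 20% returns": Verdict(
        "likely scam", "Matches a pattern seen in earlier scam messages."
    ),
}


def handle_forwarded_message(text: str) -> Verdict:
    """Return a verdict for a message a user forwarded to the hotline."""
    normalised = text.strip().lower()
    for claim, verdict in KNOWN_CLAIMS.items():
        if claim in normalised:
            return verdict
    # Unmatched messages would be queued for volunteer checkers to review.
    return Verdict("pending review", "Sent to volunteer checkers for assessment.")


if __name__ == "__main__":
    reply = handle_forwarded_message(
        "Limited-time investment with GUARANTEED 20% returns, sign up now!"
    )
    print(f"{reply.label}: {reply.explanation}")
```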

CheckMate started as a passion project founded by a team of Singapore public sector employees volunteering for better.sg, a tech-for-public-good movement.

The launch also saw the signing of an agreement between CATOS and Adobe to implement content provenance and authenticity tech in Singapore. These tools give end users information about the origins of digital content and any alterations made to it.
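
As a rough sketch of what surfacing provenance to an end user could look like, the snippet below summarises a simplified, made-up manifest. The field names are hypothetical stand-ins for the kind of origin and edit information such standards record; this is not Adobe’s actual API.

```python
# Hypothetical sketch: turn an embedded provenance manifest into a short,
# user-facing summary of where a file came from and how it was edited.
# The manifest fields below are invented for illustration, not a real schema.
import json


def summarise_provenance(manifest_json: str) -> str:
    """Summarise a provenance manifest for display alongside the content."""
    manifest = json.loads(manifest_json)
    issuer = manifest.get("issuer", "unknown issuer")
    actions = manifest.get("actions", [])
    lines = [f"Signed by: {issuer}"]
    for action in actions:
        lines.append(f"- {action.get('type', 'edit')} at {action.get('when', 'unknown time')}")
    if not actions:
        lines.append("- no recorded edits")
    return "\n".join(lines)


if __name__ == "__main__":
    sample = json.dumps({
        "issuer": "Example News Desk",
        "actions": [
            {"type": "captured", "when": "2024-05-10T09:00:00Z"},
            {"type": "cropped", "when": "2024-05-10T10:30:00Z"},
        ],
    })
    print(summarise_provenance(sample))
```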

CATOS also aims to make such tech available to end users through plug-ins and software solutions.

Growing concerns for online trust and safety

Speaking on the panel, Lee Huan Ting, Director of Information Policy at MCI, said Singapore takes a consultative approach to regulating tech.

Given the asymmetry of information between everyday technology users and Big Tech companies, Lee said it is important for the ecosystem to come together and for the research community to identify both the problems and the solutions.

Lee also pointed to the concerns governments have when it comes to AI-generated misinformation.

While the impact of AI on electoral outcomes and voter behaviour is clear in India, the US and the EU, the effect “remains to be seen” in Singapore but nevertheless remains worrying for regulators, he said.

The second concern is growing distrust and people’s ability to assess information online. This lack of trust reduces people’s confidence in interacting with the internet, which may lead them to retreat from it altogether, a “loss situation for society,” he said.

Cyber Youth Singapore's Ben Chua said that youth are not exempt. Despite being digital natives, they will have to re-learn how they digest and interact with information online.