Yang Yinping, Director, Centre for Advanced Technologies in Online Safety (CATOS); Senior Principal Scientist, A*STAR Institute of High Performance Computing, Singapore

By Amit Roy Choudhury

Meet the Women in GovTech 2025.

Yang Yinping, Director, Centre for Advanced Technologies in Online Safety (CATOS) and Senior Principal Scientist at A*STAR Institute of High Performance Computing, Singapore, shares her life’s journey. Image: Yang Yinping.

1) How do you use your role to ensure that technology and policy are truly inclusive?


As Director of the Centre for Advanced Technologies in Online Safety (CATOS) and Senior Principal Scientist at the A*STAR Institute of High Performance Computing (A*STAR IHPC), Singapore, I lead the development and application of advanced technological solutions to strengthen online trust and safety.


I work alongside multidisciplinary, multi-functional teams spanning systems engineering, deep tech research, and the programme coordination office to develop robust technologies for needle-moving applications that strengthen online safety.


A key strength of our team is its diversity – bringing together artificial intelligence (AI) scientists, computer engineers, social scientists, and experts in operations, policy, and education.


Through open dialogue, strategic partnerships, and a strong appreciation for each other’s perspectives, we anticipate how various communities experience harm online and combine high-quality ideas to advance our mission and vision.


This approach ensures our work remains relevant, innovative, and practical, especially in the rapidly evolving technology landscape.

2) What’s a moment in your career when you saw firsthand how technology or a new policy changed a citizen’s life for the better?


My role in CATOS, which hosts Singapore’s Online Trust and Safety Research Programme, has given me firsthand insight into how policies such as the Online Safety (Miscellaneous Amendments) Act, effective February 2023, and the Online Safety (Relief and Accountability) Act 2025, passed in November 2025, enable Singapore to take decisive action to safeguard the public, especially vulnerable groups such as children, from a wide spectrum of online harms, including misinformation and cyberbullying.


These policies, together with technological research and development (R&D) initiatives such as the creation of CATOS, represent a multi-pronged approach to enhancing online trust and safety in Singapore.

3) What was the most impactful project you worked on this year, and how did you measure its success in building trust and serving the needs of the public?


One of the most impactful projects this year has been the Online Trust and Safety Toolkit (OTS Toolkit™), a platform technology developed by CATOS to strengthen digital safety and public trust.


The toolkit enables detection and mitigation of harmful online content, including deception, manipulation, and toxicity.


Its signature technologies, SLEUTH and CRYSTAL, have supported compliance efforts and ground sensing, and have been adopted by organisations in Singapore to enhance digital safety.


Specifically:


  • SLEUTH™ detects deepfake videos, audio and images by analysing pixel-level inconsistencies that the human eye cannot spot, helping public agencies and media companies identify manipulated content before it spreads.
  • CRYSTAL™, also known as the CrystalFeel™ portfolio, is an integrated AI engine capable of multimodal, multidimensional and multilingual sentiment, emotion and hateful content analysis. It “crystallises” meaning from text, tone, and facial expressions, including both explicit and implicit signals. This year, this signature technology was deployed to support ground sensing and policy enforcement by analysing online signals to identify harmful content trends early, enabling proactive interventions.

Beyond detection and sensing, a key milestone was introducing PROVO™ at the Singapore International Cyber Week in October.


PROVO acts as a digital authenticity signer by attaching key details, such as the original publisher, posting date, and edit history, to images and videos.


This helps users, news publishers, and content creators verify authenticity and address challenges posed by increasingly sophisticated deepfakes.
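

The interview does not go into PROVO’s internals, but the general mechanism it describes, binding provenance details to a piece of media and signing them, can be sketched in a few lines. The hypothetical Python sketch below is only an illustration of that idea, not PROVO’s design: the field names, the use of the third-party cryptography package, and the Ed25519 keys are all assumptions. It hashes the media bytes, bundles the publisher, posting date, and edit history into a manifest, and signs the manifest so later tampering can be detected.

```python
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def build_manifest(media_bytes: bytes, publisher: str, edits: list[str]) -> dict:
    """Bundle the provenance details: who published the item, when,
    what edits it has been through, plus a hash of the media itself."""
    return {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "publisher": publisher,
        "published_at": datetime.now(timezone.utc).isoformat(),
        "edit_history": edits,
    }


def sign_manifest(manifest: dict, key: Ed25519PrivateKey) -> bytes:
    """Sign the canonical JSON form of the manifest so tampering is detectable."""
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    return key.sign(payload)


def verify_manifest(manifest: dict, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Return True only if the manifest is intact and was signed by the
    publisher's key; any edit to the manifest breaks the signature."""
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    media = b"...image or video bytes..."
    manifest = build_manifest(media, publisher="Example News Desk",
                              edits=["cropped", "colour corrected"])
    signature = sign_manifest(manifest, key)
    print(verify_manifest(manifest, signature, key.public_key()))  # True
```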


Traditional detection techniques, such as spotting unnatural blinking, are no longer reliable, as deepfakes have become highly refined.


PROVO complements detection tools, helping address the ongoing challenge of evolving manipulation techniques.


There are plans to make PROVO available for public use in 2026.

4) What was one unexpected lesson you learned this year about designing for real people? This can be about a specific project or a broader lesson about your work.


I was inspired by our seniors' enthusiasm for digital learning.


Through engaging the public at community events such as the National Library Board’s (NLB’s) Be SURE Together festival and the Infocomm Media Development Authority’s (IMDA’s) Digital for Life festival, I saw firsthand how eager our seniors are to learn about generative AI, deepfakes, and digital safety.


Their passion and desire to learn have influenced our priorities, motivating us to develop more accessible public education initiatives in the coming year.

5) We hear a lot about AI. What’s a practical example of how AI can be used to make government services more inclusive and trustworthy?


One practical example is emotion AI.


An advanced emotion analysis engine such as CrystalFeel, developed through research programmes I have led, can deliver deep, real-time insights into public sentiment and opinion.


CrystalFeel and its sister technologies leverage large-scale social data to identify emerging needs, concerns and priorities within the community.


By translating these emotional signals into actionable intelligence, emotion AI empowers decision-makers to craft policies and communications that are more responsive, empathetic, and effective.  
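

CrystalFeel’s internals are not described here, so the Python sketch below is only a minimal illustration of the general pattern of turning emotional signals into actionable intelligence: score each post for emotion intensity, average the scores by topic, and flag topics where an emotion such as fear is running high so agencies can respond early. Everything in it, including the stub scorer, field names, and threshold, is an illustrative assumption rather than how CrystalFeel actually works.

```python
from collections import defaultdict
from statistics import mean
from typing import Callable

# A scorer maps a piece of text to emotion intensities in [0, 1].
EmotionScorer = Callable[[str], dict[str, float]]


def toy_scorer(text: str) -> dict[str, float]:
    """Crude keyword stub standing in for a real emotion engine such as CrystalFeel."""
    lowered = text.lower()
    return {
        "fear": 0.9 if ("scam" in lowered or "worried" in lowered) else 0.1,
        "anger": 0.8 if "unacceptable" in lowered else 0.1,
        "joy": 0.7 if "thank" in lowered else 0.1,
    }


def aggregate_emotions(posts: list[dict], score: EmotionScorer) -> dict[str, dict[str, float]]:
    """Average per-post emotion intensities by topic to show where concern is concentrated."""
    by_topic: dict[str, list[dict[str, float]]] = defaultdict(list)
    for post in posts:
        by_topic[post["topic"]].append(score(post["text"]))
    return {
        topic: {emotion: mean(s[emotion] for s in scores) for emotion in scores[0]}
        for topic, scores in by_topic.items()
    }


def flag_hotspots(summary: dict[str, dict[str, float]],
                  emotion: str = "fear", threshold: float = 0.6) -> list[str]:
    """Return topics whose average intensity for one emotion crosses a threshold,
    i.e. candidates for early, proactive communication."""
    return [t for t, emotions in summary.items() if emotions.get(emotion, 0.0) >= threshold]


if __name__ == "__main__":
    posts = [
        {"topic": "deepfake scams", "text": "Worried my parents will fall for this scam video."},
        {"topic": "deepfake scams", "text": "These scam calls are getting scarily realistic."},
        {"topic": "library events", "text": "Thank you NLB for the digital literacy workshop!"},
    ]
    summary = aggregate_emotions(posts, toy_scorer)
    print(flag_hotspots(summary))  # ['deepfake scams']
```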


For example, during the Covid-19 pandemic, my team used CrystalFeel to support public health research by providing insights into the community’s emotional well-being.


Our recent findings also suggest that emotion indicators can significantly enhance the ability to predict public demand for mental healthcare, opening new possibilities for proactive mental health management during a crisis.

6) How are you preparing for the next wave of change in the public sector? What new skill, approach, or technology are you most excited to explore in the coming year?


I am excited about the next wave of technological milestones, including durable content verification, live deepfake detection, and real-time emotion analysis.


Staying ahead of emerging trends, including the opportunities and unknown risks of technologies such as large language models and AI companions, is key to my team’s approach of using “Tech to Safeguard Tech” and innovating responsibly for the public good.

7) Outside tech, what excites you the most?


While I am passionate about technology, I value human connection above all.


I enjoy travelling, seeing the world and interacting with people, especially those who help me appreciate the precious value of human connection.