Public sector must focus on developing trustworthy AI for daily use
By Amit Roy Choudhury
Speakers at GovInsider’s AIxGov 2026 event share how the public sector is moving from the hype of AI to incorporating the technology in daily use with adequate safeguards.

Speakers at the opening panel, Building AI-Ready Government. From left to right: Vishnu Nanduri, Kyndryl Head of AI & Innovation, ASEAN & Korea; Kimberly Zhang, National University Health System (NUHS) Head, Group Service Transformation Office; Zhao Wanting, NHG Health Assistant Director, Lead Data Scientist; Ngui Jian Gang, AI Singapore Head of Model Development; and Angela Chee, Agency for Science, Technology and Research (A*STAR) Director, Special Projects. Image: GovInsider.
The Singapore public sector has been focusing on building the infrastructure to securely scale artificial intelligence (AI) adoption, including clear governance frameworks, technical safeguards to bake guardrails into AI, and trusted data.
This was the key message at GovInsider’s AIxGov event held on May 5 in Singapore.
AI Singapore’s Head of Model Development, Ngui Jian Gang, noted that an AI model was not a capability, but an artefact.
Speaking at the panel titled Building AI-Ready Government, Ngui gave the example of Singapore’s large language model (LLM), SEA-LION, which was built as a public infrastructure for Southeast Asian languages.
This has allowed others to fine-tune their models rather than building from scratch, Ngui said.
Various speakers noted that an AI-ready government was less about flashy models and more about local capability, trust, and real problem-solving.
In the same panel, NHG Health’s Assistant Director, Lead Data Scientist, Zhao Wanting, noted that AI professionals needed to develop the expertise to interact with technical teams, domain experts, and senior management.
“We need someone in the middle of data and domain experts, and senior management… [who can] communicate with each to identify pain points, translate them into solvable technical problems, and explain to senior management,” she said.
Changing funding models
Speakers at the fireside chat, Beyond the global stack: National AI capabilities as a public good, noted the need to change the funding process from the traditional waterfall approach, which focused on year-by-year Capex/Opex models, to agile block funding approaches.
AI Singapore’s Senior Director of AI Products, Leslie Teo, emphasised the need for organisations to develop capabilities to understand how deeply the publicly available models replicate their data, identify fault lines, and determine appropriate usage.
NHG Health’s Head, Digital Services, Clinical Shared Services, Dr Kevin Kok, said having access to locally developed models like SEA-LION has allowed the healthcare sector to enhance capabilities for specific use cases rather than build from scratch with limited resources.
Developing trusted data
At a fireside chat titled Trusted data by design, National University of Singapore (NUS)’s Adjunct Associate Professor, Loke Chok Kang, observed that traditional governance frameworks often fail because they “operated external to AI systems”, rather than being embedded at the point where data was used, and decisions were made.
“When we talk about trusted data by design, what we are really saying is that trust cannot be added after the fact.
“It has to be engineered into the system from the start, built into the data flows, access controls, and even how your AI systems generate the outputs,” Loke said.
Touching on digital inclusion, Loke noted that services like Singapore’s SingPass enable “incredible seamless services but also assume a certain level of digital access and familiarity”.
When designing services, there was a need to consider how to develop systems “that were just as seamless for those who may not be as digitally confident”.
Moving beyond demos
Once trust and funding mechanisms had been established, the next step was to move beyond impressive demos to tools that could reshape day-to-day work.
Singapore General Hospital (SGH)’s Assistant Director, Future Health System, Jonathan Tan Yue En, and KK Women's and Children's Hospital (KKH)’s Assistant Nurse Clinician, Advanced Practice Nurse, Lai Liling, shared that non-technical healthcare staff were increasingly using low-code tools and government-secure GPT models like Pair to build solutions.
Lai noted that her team built a nursing information chatbot because the needs of the ward were too urgent to wait for traditional development cycles.
Speaking at the fireside chat Beyond the demo: Bridging the gap between AI prompt and public impact, Tan noted that teams could innovate freely within safe boundaries without a conflict with compliance requirements if they scoped experiments appropriately.
Adding to Tan’s point, Lai noted that clear regulatory frameworks help builders by reducing uncertainty and enabling them to confidently deploy to colleagues.
“Oversight helps us stop second-guessing ourselves because then we know what we can work with… It's a safety net for us,” she noted.
Agentic AI in defence sector
Pang Chee Kong, Dean of the SAFTI Military Institute (part of Singapore’s Ministry of Defence and the Singapore Armed Forces), said the Singapore defence sector was using agentic AI to augment research, learning analytics and decision support in high-stakes environments.
Speaking at the fireside chat titled Scaling agentic workflows in public service, he said smart classrooms and automated analysis had significantly improved productivity in the defence sector.
He, however, added an important caveat: humans need to stay in the loop.
Pang noted that domain experts were best positioned to identify AI hallucinations because they could immediately recognise when outputs contain errors, inconsistencies, or implausible conclusions.
His advice was: “Be a subject matter expert… You know the problem statement. If something comes up as gibberish, spelling mistakes, or the sample size is different, you should be the first to address that and give prompts to guide back.”
Fully autonomous AI impractical
Pang said the notion of fully autonomous AI systems making consequential decisions without human intervention was both impractical and undesirable in the public sector.
“Organisations must implement multiple layers of protection, including man-in-the-loop, expert engineering checks, and ethical frameworks to ensure AI systems operate within acceptable boundaries,” he added.
On AI governance, Infocomm Media Development Authority of Singapore (IMDA)’s Assistant Director (Data Analytics and Intelligence), Jonathan Samraj, said public agencies should operate under the assumption that they have already been compromised.
This mindset shift enabled a sharper focus on detection speed and protection enhancement using AI, rather than on prevention alone, Samraj said, speaking at a session on cybersecurity in the age of AI.
“There's this concept of a sort of assumed breach whereby you have already been attacked. How can we then use AI to make sure that one, we can detect this as fast as we can and better still, how can we use AI to protect ourselves better?” Samraj added.
He noted that while the public sector usually shared technical information about vulnerabilities and attacks, the bigger challenge was overcoming the resistance to sharing what works in tackling them.
“We need to protect our agency's missions, and sometimes that goes to the extent that we are hesitant to work together. This is precisely what comes back to harm us,” Samraj noted.
Across the event’s sessions, a consistent theme emerged: scaling AI in the public sector rests on building trusted data, empowering frontline innovators within smart guardrails, and investing in local capabilities.
This would equip Singapore public servants with the tools to lead AI-driven change.