Singapore solved AI governance paralysis. Here's how

By Mohamed Shareef

The world's first public sector framework for agentic AI shows why smaller governments might actually have the advantage.

For those building digital government in resource-constrained environments, Singapore's updated Model AI Governance Framework for Agentic AI isn't just a policy update - it's a strategic blueprint, says the former Minister of State for Environment, Climate Change and Technology in the Maldives. Image: Canva

Singapore dropped a governance bomb on January 22. Many governments will miss what just happened.  

 

Singapore’s Infocomm Media Development Authority (IMDA) published the Model AI Governance Framework for Agentic AI - the world's first comprehensive public sector guide for autonomous AI systems that don't just recommend or assist.  

 

They act. 

 

While Brussels spent years crafting the AI Act and Washington fragments across fifty states, developing countries have been stuck in a different trap: waiting for the "perfect" regulatory framework before doing anything.

 

Singapore just showed there's a third way. 

 
For those building digital government in resource-constrained environments, this isn't just a policy update. It's a strategic blueprint. 

Iterative model of governance 

 

Every government technology agency faces the same problem. Technologies evolve faster than policy cycles.


By the time you've consulted stakeholders and drafted regulations, the technology you're governing is already three generations ahead. 

 

The solution isn't faster legislation. It's different governance architecture. 

 

Singapore's framework is explicitly designed to evolve.  

 

Sector-agnostic, principle-based, built for iteration rather than perfection. Implement version 1.0 in six months, learn from deployment, release version 1.1 in the next six months.  

 

This iterative model actually suits fast-moving AI better than comprehensive legislation that's outdated before enactment. 

Four dimensions for the pragmatic planner 

 

What distinguishes this from earlier governance theater is brutal specificity. The framework tells you exactly what to assess, bound, test, and monitor: 

 

Capability-Based Risk Framing: Distinguishes systems by their action-space (what they can access) and autonomy (how independently they decide). Does your agent have read-only database access or can it write? Can it send emails or only draft them? These granular distinctions determine whether you're automating workflow or creating systemic risk (see the sketch after these four dimensions). 

 

Addressing Automation Bias: As agents become more capable, human oversight becomes simultaneously more critical and more difficult. The framework provides practical guidance on defining approval checkpoints and auditing whether they remain effective over time. 

 

Technical Controls with Actual Specs: Specific testing for task execution accuracy, policy compliance, tool-calling precision. For agencies with limited AI expertise, this specificity is invaluable. 

 

Tiered Transparency: Recognises that citizens interacting with agents need different information than employees integrating agents into workflows. 
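
To make the first two dimensions concrete, here is a minimal sketch of how an agency might encode action-space and autonomy as reviewable configuration. Every name in it (Capability, Autonomy, risk_tier) is hypothetical - the framework prescribes the distinctions, not this code.

```python
# Illustrative only: hypothetical names, not IMDA's actual specification.
from dataclasses import dataclass
from enum import Enum


class Autonomy(Enum):
    DRAFT_ONLY = 1       # agent proposes; a human executes
    HUMAN_APPROVAL = 2   # agent executes only after sign-off
    UNATTENDED = 3       # agent executes on its own


@dataclass(frozen=True)
class Capability:
    tool: str             # e.g. "database", "email"
    actions: frozenset    # e.g. frozenset({"read"}) vs {"read", "write"}
    autonomy: Autonomy


def risk_tier(capabilities: list) -> str:
    """Crude tiering: write access plus unattended execution is the
    'systemic risk' corner; read-only automation sits at the bottom."""
    if any("write" in c.actions and c.autonomy is Autonomy.UNATTENDED
           for c in capabilities):
        return "high"
    if any("write" in c.actions for c in capabilities):
        return "medium"
    return "low"


# A read-only analyst agent vs. one that sends email unattended.
analyst = [Capability("database", frozenset({"read"}), Autonomy.UNATTENDED)]
notifier = [Capability("email", frozenset({"write"}), Autonomy.UNATTENDED)]

assert risk_tier(analyst) == "low"
assert risk_tier(notifier) == "high"
```

Written down this way, "can it send emails or only draft them" becomes a line in a configuration review rather than a judgment call buried in a prompt.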

Turning resource constraints into design advantage 

 

Here's where developing countries and small nations can flip the script entirely. 

 

The framework assumes continuous monitoring capacity most governments lack. Rather than seeing this as a barrier, embrace it as a design advantage.  

 

If you can't monitor workflows 24/7, design agents that don't need 24/7 monitoring.  

 

Narrower action spaces, stricter human approval requirements, built-in rollback mechanisms. These aren't compromises. They're forcing functions for better design. 
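
As one hedged illustration of those forcing functions - hypothetical names again, not anything the framework mandates - here is an agent runner where every state-changing step passes a human approval gate and records how to undo itself:

```python
# Illustrative sketch: approval-gated execution with built-in rollback.
from typing import Callable


class ReversibleAction:
    def __init__(self, description: str,
                 execute: Callable[[], None],
                 rollback: Callable[[], None]):
        self.description = description
        self.execute = execute
        self.rollback = rollback


class GatedAgentRunner:
    def __init__(self, approve: Callable[[str], bool]):
        self.approve = approve          # the human checkpoint, not the agent
        self.undo_stack: list[ReversibleAction] = []

    def run(self, action: ReversibleAction) -> bool:
        if not self.approve(action.description):
            return False                # denied: nothing happened
        action.execute()
        self.undo_stack.append(action)  # remember how to undo it
        return True

    def roll_back_all(self) -> None:
        while self.undo_stack:          # unwind in reverse order
            self.undo_stack.pop().rollback()


# Usage: approvals come from a human; simulated here as always-yes.
runner = GatedAgentRunner(approve=lambda desc: True)
runner.run(ReversibleAction(
    "flag case for review",
    execute=lambda: print("flagged"),
    rollback=lambda: print("unflagged"),
))
runner.roll_back_all()  # prints "unflagged"
```

An agency that can't watch the agent around the clock can still unwind everything it did since the last checkpoint.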

 

Singapore is already proving this works.  

 

GovTech Singapore's Agentic AI Primer, published in April 2025, guides agencies on deploying autonomous systems.  

 

They're testing multi-agent setups for data analysis - modelling AI agents as government officers, each working on specialised tasks. Going internal before external, the government started with document processing and data analysis before citizen-facing services. 

 

Singapore's CSIT, GovTech, and HTX are deploying agentic AI through partnerships with Google Cloud, testing systems in air-gapped environments.  

 

They're building a dedicated AI agents sandbox to evaluate solutions before issuing deployment recommendations. 

 

This is governance through experimentation, not speculation. 

Reversing the deployment pattern 

 

The framework enables something most governments get backwards.  

 

Instead of the private sector implementing first and the government regulating later, prioritise public sector deployment to build internal capacity before attempting to regulate private sector use. 

 

When the government implements first, you create practitioners who understand the technology's actual behavior, not just marketing claims.  

 

That expertise transforms regulatory conversations from theoretical to operational. 

 

Trust as critical infrastructure 

 

For countries building e-government systems from the ground up, there's a strategic window. You're building digital infrastructure anyway.  

 

Integrating agentic AI governance from the start costs far less than retrofitting later. 

 

Two decades of building digital infrastructure taught me this: the marginal cost of good governance at the design stage is far lower than remediation after deployment. 

 

Trust is our most critical infrastructure. One badly deployed system that makes discriminatory decisions sets back digital transformation by years. Recovery takes exponentially longer than prevention. 

 

The framework's emphasis on human accountability, testing, and transparency provides tools to build public confidence. 

 

When citizens understand that AI agents have defined boundaries, require human approval for significant actions, and are continuously monitored, adoption resistance decreases. 

Why this window matters 

 

The framework maps to the OECD AI Principles and the Global Partnership on AI (GPAI) standards. 

 

For developing countries, this interoperability is strategic. When your fintech companies want regional market access, demonstrating alignment with recognised frameworks becomes a competitive advantage. 

 

IMDA is actively seeking feedback and case studies from governments deploying agentic AI. This is governance as open source - designed to evolve through collective learning rather than centralised perfection. 

 

For nations ready to move: adapt through workshops in months 1-3, deploy in sandboxed environments through month 12, expand to external services by month 24. The pathway is clear. 

 

Singapore's framework arrives when governments need to reject false choices. You don't choose between moving fast and moving responsibly.  

 

You don't wait for Western regulatory frameworks to mature. You don't sacrifice governance quality for implementation speed. 

 

Agentic AI will transform public service delivery. That's not a question. The question is whether governments will govern that transformation with agility that matches the technology's pace of change. 

 

Singapore just showed it's possible. The blueprint is public. The window is open. 

 


 

-----------------------------------------------------

 

Mohamed Shareef is a former Minister of State for Environment, Climate Change and Technology in the Maldives (2021-2023). He previously served as Permanent Secretary of the Science and Technology Ministry (2015-2021) and Chief Information Officer at the National Centre for Information Technology (2005-2014), and led the development of the country's national digital public infrastructure. He also served in academia, including as a researcher at the United Nations University. He currently serves as Senior Advisor for Digital Transformation at Nexia Maldives.