How governments can move from endless AI projects to measurable wins

By Qlik

At a recent analyst meeting, Qlik’s leadership shared that government AI projects can overcome pilot paralysis by shifting focus to accountable automation and quick-win use cases.


The biggest threat to government artificial intelligence (AI) projects isn't the technology, but the lack of measurable outcomes.


Too often, ambitious tech initiatives become budget sinks that never reach the public, leaving government agencies grappling with what is called “pilot paralysis”.


At Qlik’s recent analyst meeting, the key takeaway was that a strategic shift is needed to move from endless development to rapid, accountable wins.


Speakers at the meeting included Qlik’s Senior Vice President for Asia Pacific, Maurizio Garavello; Qlik’s Chief Technology Officer, Charlie Farah; and the AI Asia Pacific Institute’s President and Executive Director, Kelly Forbes.


They suggested that the problem isn't a failure of the technology, but a fundamental flaw in how AI projects are scoped and executed.


The path to successful public sector AI, they argued, lies in prioritising accountable automation and quick-win use cases.

Where AI projects go to languish


While AI models are highly accessible, the real challenge tends to be the lack of data trust and organisational alignment.


Departments are often reluctant to share information, citing ownership or data quality concerns. This struggle to break down internal silos has been a critical hurdle for private and public organisations alike, preventing AI projects from moving from pilot to production.


This organisational inertia prevents any large-scale project from establishing a solid, reliable data foundation.


Qlik’s leadership therefore argues that the conversation should shift from buying technology to investing in governance.


As a result, Qlik has introduced a new standard within its Talend Cloud solution, the AI Trust Score, which assesses whether data is ready for AI by measuring AI-specific dimensions such as diversity, timeliness, and accuracy.


With such measures, agencies can rebuild internal confidence in shared datasets – an essential prerequisite for agentic AI systems to function effectively, ensuring outcomes are based on verified, high-quality data.


For example, Qlik worked with Malaysia’s AmBank to create a single source of trust through a robust data governance strategy.


The bank leveraged Qlik Talend’s ETL and data quality capabilities to unify data from siloed systems across the bank, and later tapped into Qlik Sense Client-Managed to centralise its analytics and reporting.


This project reduced the turnaround time for compliance documentation requests from two months to just four hours, while giving banking employees access to key insights that drive better business outcomes.


In short, the solution to scaling AI projects isn’t more technology, but a strategic investment in foundational organisational governance.



The potential of automation agents


Instead of building one massive, multi-year system to overhaul an entire agency’s operations, an agency could build an agent that tackles a single task, quickly demonstrating results to secure leadership buy-in.


The single task might include automating the initial triage of fraud claims in a social service agency, verifying documentation for a business permit application, or generating automated updates for constituents based on predefined regulatory changes.


Agentic AI, in simple terms, could help break down a large business process into a series of smaller, self-contained, and highly focused automation agents.


As the Qlik team explained, this could help make automation “more digestible” by sizing it down. And this strategic downsizing could offer an immediate benefit to budget holders: measurable return on investment (ROI).
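The decomposition the speakers described can be sketched in code. The following is a minimal, illustrative example – the agent names, thresholds, and document requirements are assumptions for illustration, not Qlik's implementation – showing how a large fraud-claims process might be split into small, single-task agents chained in a pipeline:

```python
from typing import Callable

# Hypothetical: one large fraud-claims workflow decomposed into small,
# self-contained agents, each performing exactly one task.
Claim = dict

def triage_agent(claim: Claim) -> Claim:
    # Flag high-value claims for human review instead of auto-processing.
    claim["needs_review"] = claim["amount"] > 10_000
    return claim

def documentation_agent(claim: Claim) -> Claim:
    # Verify that the required documents are attached.
    required = {"id_proof", "claim_form"}
    claim["docs_complete"] = required.issubset(claim.get("documents", set()))
    return claim

def run_pipeline(claim: Claim, agents: list[Callable[[Claim], Claim]]) -> Claim:
    # Each agent's output is auditable on its own, so results can be
    # measured per task rather than waiting for a multi-year rollout.
    for agent in agents:
        claim = agent(claim)
    return claim

claim = run_pipeline(
    {"amount": 2_500, "documents": {"id_proof", "claim_form"}},
    [triage_agent, documentation_agent],
)
print(claim["needs_review"], claim["docs_complete"])  # False True
```

Because each agent is narrow and self-contained, its success or failure can be measured independently – the property that makes the ROI trackable within months rather than years.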


Additionally, this approach flips the traditional IT project mentality.


Instead of waiting three years for a holistic outcome, agencies can now track their losses or gains after six months of using an agent.


The project builds momentum based on demonstrated success, rather than mandates, they emphasised.

The case for accountable automation


For civil servants, the principle of accountability is key when adopting any autonomous system, such as agentic AI.


Agentic systems must not be black boxes. In a public sector context, every decision that affects a citizen – be it an entitlement claim or a tax assessment – must be auditable.


The Qlik team outlined two key governance components: data trust and quality, and explainability with audit trails.


Before any agent is deployed, the underlying data must be rigorously assessed. This is where the AI Trust Score comes into play.


Agencies must move past arguments over data ownership and collaboratively establish metrics for data validity, completeness, accuracy, and timeliness. The score provides an objective and standardised measure of confidence in the data sets powering the AI agents.
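Qlik does not publish the internal formula behind the AI Trust Score, but the idea of combining agreed metrics into one standardised confidence measure can be sketched as a weighted composite. The dimension names, scores, and weights below are illustrative assumptions only:

```python
from dataclasses import dataclass

@dataclass
class DimensionScore:
    """One agreed-upon data quality dimension, scored 0.0 to 1.0."""
    name: str
    score: float
    weight: float

def trust_score(dimensions: list[DimensionScore]) -> float:
    """Combine per-dimension scores into a single weighted confidence value."""
    total_weight = sum(d.weight for d in dimensions)
    if total_weight == 0:
        raise ValueError("at least one dimension must carry weight")
    return sum(d.score * d.weight for d in dimensions) / total_weight

# Hypothetical assessment of a shared dataset before agent deployment.
assessment = [
    DimensionScore("validity", 0.92, 1.0),
    DimensionScore("completeness", 0.85, 1.0),
    DimensionScore("accuracy", 0.78, 2.0),   # weighted higher for this use case
    DimensionScore("timeliness", 0.60, 1.0),
]
print(round(trust_score(assessment), 3))  # 0.786
```

The value of such a composite is less the arithmetic than the process: departments must first agree on which dimensions matter and how much each counts, which is precisely the governance conversation the speakers advocated.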


As for explainable AI, it ensures that a human operator – in this case, the civil servant – can inspect the agent's logic, understand exactly which data sources were used, and identify the algorithms that led to a specific recommendation.


The “human in the loop” principle preserves essential public accountability and mitigates the risk of an unchecked one per cent data error snowballing into a “dramatic” policy outcome.
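One common way to make an agent's decisions auditable is to attach a structured record to every recommendation. The sketch below is a generic illustration under assumed field names – not a Qlik API – of how each decision could capture the data sources consulted, the rule applied, and a human sign-off flag:

```python
import json
from datetime import datetime, timezone

def recommend_with_audit(claim_id: str, source_datasets: list[str],
                         rule: str, decision: str) -> str:
    """Wrap a recommendation in an audit record so an operator can later
    trace exactly which data and logic produced it."""
    record = {
        "claim_id": claim_id,
        "decision": decision,
        "rule_applied": rule,             # which logic led to the recommendation
        "data_sources": source_datasets,  # which datasets were consulted
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reviewed_by_human": False,       # flipped when a civil servant signs off
    }
    return json.dumps(record)

# Hypothetical entitlement-claim recommendation with its audit trail.
entry = recommend_with_audit(
    "CLM-1042", ["benefits_register", "income_records"],
    "income_below_threshold", "approve",
)
print(json.loads(entry)["rule_applied"])  # income_below_threshold
```

Storing such records in an append-only log gives auditors the trail the speakers call for: every citizen-affecting decision remains inspectable after the fact.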


The experts concluded by pointing to the “gigantic differential” created by the 10 per cent of AI projects that succeed.


According to the discussion, these are the organisations that have embraced smaller, governed, and outcome-focused agentic systems.


These outcomes are powered by the Qlik Associative Engine, which ties together data trust, explainability, and agentic AI. It does this by analysing data contextually, similar to how a human thinks.


For governments worldwide, the message is clear: the path to successful digital transformation is paved not with endless AI projects, but with rapid, accountable wins for high-impact use cases.