Transparent processes must anchor public sector AI use
By Raymond Ngan
This approach preserves human oversight of AI functions, which, in turn, builds trust and process efficiency, according to Appian.

Off-the-shelf standalone artificial intelligence (AI) tools, while attractive, are less effective because they struggle to function within the highly regulated, complex data environments of government agencies. Image: Canva.
Public sector leaders around the world believe that artificial intelligence (AI) is essential for meeting citizen expectations amid staffing shortages and budget constraints.
The reason is straightforward.
AI has the potential to deliver real public sector impact. But this is only achievable when it is anchored in strong, transparent processes with clear accountability and purpose.
In practice, this means embedding AI within the workflows that govern how work is done - from decision-making and approvals to service delivery and compliance - rather than treating it as a standalone tool.
This starts with defining how work should be executed, before introducing AI to enhance specific steps within that process.
Off-the-shelf standalone AI tools, while attractive, are not as effective because they struggle to function within highly regulated, complex data environments in public sector agencies.
What matters for AI is a clearly defined scope, traceable and auditable systems, and a human in the loop.
A process-first approach creates a foundation for scaling AI responsibly across programmes and agencies, where decisions carry legal, financial and real-world consequences for citizens.
AI adoption is accelerating, but impact remains uneven
Across Australia’s public sector, AI adoption is accelerating rapidly, but translating that momentum into meaningful operational impact remains a challenge.
Recent Appian research shows that 70 per cent of public sector workers now use AI as part of their daily tasks, up from 58 per cent just one year earlier. This reflects how quickly AI has moved from experimentation into everyday use across government.
However, this rapid uptake is being constrained by underlying structural issues. Around 72 per cent of public sector workers report challenges with disconnected systems and data, while more than half say they are working with incomplete or inaccessible information.
At the same time, confidence in AI is increasing. Nearly 68 per cent of workers say they understand the tools they are using. But confidence alone is not enough to deliver consistent outcomes.
The result is a widening gap between adoption and impact. AI is visible across organisations, but too often remains disconnected from the core processes that govern how work is carried out. This limits its ability to drive measurable improvements in efficiency, decision-making and service delivery.
Singapore at forefront of AI adoption
The Singapore public sector has moved from broad experimentation to a mission-driven, whole-of-government AI strategy.
According to estimates, around 80 per cent of Singapore’s 150,000 public sector workers use AI tools in their workplace.
The Smart Nation 2.0 (SN2.0) policy document underpins the government's approach to AI, governed by the risk-proportionate model under the National AI Strategy 2.0 (NAIS 2.0).
Singapore’s approach reflects a shift from experimentation to structured, operation-driven AI deployment, where AI is integrated into the day-to-day work of government, rather than treated as a separate layer of technology.
This aligns with Appian’s view that AI agents should operate within workflows that are transparent, adjustable, and auditable.
This is critical in government environments, where policies and regulations frequently change, and processes must be updated quickly to ensure outcomes remain compliant and aligned to public requirements.
Guardrails do not slow AI; they make it usable at scale
With AI operating within defined guardrails, agencies can scale up with confidence and manage risk proactively.
This is possible because staff trust the system, and leaders can defend decisions through clear, auditable trails.
Process-based guardrails that provide role-based access, transparent decision trails, and human-in-the-loop approvals make innovation sustainable and usable at scale.
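These guardrails can be made concrete. The minimal Python sketch below shows the three elements named above working together: role-based access control, an auditable decision trail, and a human approval step on top of an AI recommendation. All names (roles, functions, item identifiers) are illustrative assumptions, not Appian's API.

```python
from datetime import datetime, timezone

# Hypothetical role model: only supervisors may approve.
ROLE_PERMISSIONS = {
    "caseworker": {"draft"},
    "supervisor": {"draft", "approve"},
}

# Transparent decision trail: every action, allowed or denied, is recorded.
audit_log: list[dict] = []

def record(actor: str, action: str, item: str) -> None:
    audit_log.append({
        "actor": actor,
        "action": action,
        "item": item,
        "at": datetime.now(timezone.utc).isoformat(),
    })

def approve(actor: str, role: str, item: str, ai_recommendation: str) -> str:
    """Human-in-the-loop approval: the AI recommends, a person decides."""
    if "approve" not in ROLE_PERMISSIONS.get(role, set()):
        record(actor, "denied", item)
        raise PermissionError(f"role '{role}' cannot approve {item}")
    record(actor, f"approved (AI suggested: {ai_recommendation})", item)
    return "approved"

# A supervisor approves an AI-recommended outcome; the trail captures it.
approve("j.lee", "supervisor", "grant-042", "approve")
print(audit_log[-1]["actor"], audit_log[-1]["action"])
```

The point of the sketch is that the guardrail lives in the process, not the model: the AI's output is only a recommendation until someone with the right role acts on it, and every decision leaves a queryable record that leaders can use to defend outcomes.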
Again, this approach mirrors the Singapore government’s AI guardrails policy that is structured as a "dual shield" that integrates safety throughout both the development and operational lifecycles.
Retaining control
When AI operates through governed processes, leaders retain control over how decisions are made and can adjust or halt automation as policies, laws, or conditions change.
Where the AI model lacks sufficient confidence, the system automatically defaults to predefined, rule-based actions, such as routing the work to a human reviewer to ensure outcomes remain safe, predictable, and defensible.
This is particularly critical in high-pressure environments like public safety, health, human services, and procurement.
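The confidence-based fallback described above can be sketched in a few lines of Python. The threshold value, the `classify` stand-in, and the routing labels are all assumptions for illustration; a real deployment would call a governed model and route into an agency's case-management workflow.

```python
from dataclasses import dataclass

# Assumed policy value: set per agency risk appetite, not a universal constant.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Claim:
    claim_id: str
    text: str

def classify(claim: Claim) -> tuple[str, float]:
    """Stand-in for an AI model call; returns (label, confidence)."""
    # Hypothetical fixed answer; a real system would invoke a deployed model.
    return ("approve", 0.62)

def route(claim: Claim) -> str:
    """Act automatically only when the model is confident enough."""
    label, confidence = classify(claim)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{label}"  # AI acts within its guardrail
    return "human_review"       # predefined rule-based fallback: escalate to a person

print(route(Claim("C-001", "...")))  # low confidence, so the claim goes to a reviewer
```

The design choice worth noting is that the fallback is deterministic: when the model is unsure, the process does not guess, it follows a predefined rule, which is what keeps outcomes predictable and defensible.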
Across sectors, though, the common theme is the same – the importance of embedding AI within governed processes to improve speed, accuracy and outcomes.
Appian’s process-first approach is already delivering measurable impact in supporting the modernisation efforts of organisations that work closely with the public sector.
A good example is our work with Acclaim Autism, a multi-state behavioural healthcare company headquartered in Philadelphia, in the US.
The company faced a common challenge in behavioural health: getting care to patients quickly in a field traditionally held back by regulatory requirements, communication gaps, insurance verification, and manual intake forms.
Process-led AI in action
Acclaim Autism built a custom onboarding solution on the Appian Platform since off-the-shelf options did not meet their needs.
The solution incorporates AI into the organisation’s patient onboarding platform to process medical documents.
This reduced insurance rejection rates from 80 per cent to five per cent, and patient wait times from six months to under a week, freeing staff to focus on patient care.
Another example is the Texas Department of Public Safety, which was looking for a better way to deal with a recurring problem: answering procurement questions quickly without interrupting the work of the procurement team.
The problem was addressed by using the Appian Platform and AI to modernise and streamline the procurement process. The new system searches through the department’s proprietary knowledge base and provides accurate answers in seconds.
The University of South Florida has been able to transform the student experience with a fully mobile solution with the help of Appian.
The university’s Archivum, a mobile application developed on the Appian Platform, gives students mobile access to their academic advising records and courses of study.
Deployed in just three weeks, the application reduced the registration and advising process from two weeks to two days.
From potential to measurable results
Today, government leaders are no longer asking whether to use AI. Rather, they are now focused on how to integrate AI in a way that brings real value to their organisation.
The best way forward is to define organisational processes first and then determine where AI could add value to these workflows.
At the same time, there is a need to build governance and guardrails into workflows right from the start.
Most importantly, success must be measured in public outcomes, not technical novelty. This requires a shift from measuring technical capability or experimentation to measuring real improvements in service delivery, efficiency and citizen experience.
While AI is powerful, in the public sector it is process that turns that potential into measurable outcomes.
----------------
The author is Regional Vice President - Asia, Appian