Clearing the air about agentic AI in government

By Amit Roy Choudhury

GovTech Singapore’s AI Practice Group publishes a primer outlining how agentic AI and AI agents could be used to enhance public sector operations.

GovTech Singapore’s AI Practice Group’s Agentic AI primer demystifies AI agents and explains both the advantages and the disadvantages of using them in the public sector. It concludes that, used correctly, these agents have tremendous potential in government services. Image: Canva. 

From large language models (LLMs) to generative artificial intelligence (GenAI) and now agentic AI, these buzzwords are flooding social media feeds and coffee chats in today’s workplace – but does anyone understand the differences between them? 


GovTech Singapore’s Lead Data Scientist, AI Capability Development, Waston Chua, wrote in a LinkedIn post: “The term agentic AI is everywhere, yet it often means different things to different people”.  


He added that some think it [agentic AI] is just about prompting an LLM to use tools, while others imagine a network of LLMs talking to each other.  


Chua observed that AI agents hold great promise for public service, but without a clear and shared understanding of what they are and how they can be used, it is difficult to ensure meaningful adoption. 


While “AI agents” refers to a specific application of agentic AI, “agentic AI” refers to the AI models, algorithms and methods that make them work, according to a Forbes article. 




To improve understanding, Chua and three colleagues from GovTech Singapore’s AI Practice Group developed the Agentic AI Primer to demystify AI agents and establish a common vocabulary for AI developers and stakeholders. 


The primer is intended to provide guidance to the Singapore public sector on when and how to use agentic AI, referencing best practices and industry use cases. 

AI agents as government officers 


AI agents are autonomous intelligent entities that perform specific tasks without the need for continuous human intervention, the document said.  


These agents can perceive their environment through sensors or data inputs, reason about the information they gather, take actions based on their understanding, and learn from feedback and experience to improve their performance over time.  


This allows them to pursue goals, complete tasks and adapt to new information in real time, the document observed. 
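The perceive-reason-act-learn cycle described in the primer can be sketched as a simple loop. This is an illustrative sketch only: the class, its method names and the trivial keyword-based "reasoning" rule are all assumptions for demonstration, not code from the primer (in a real system the reasoning step would typically be an LLM call or planner).

```python
# Minimal sketch of an agent's perceive-reason-act-learn loop.
# All names and the toy routing rule are illustrative assumptions.

class SimpleAgent:
    def __init__(self):
        self.memory = []  # accumulated (decision, outcome) feedback

    def perceive(self, observation):
        # In practice: sensor readings, API responses, user messages.
        return {"input": observation, "history": list(self.memory)}

    def reason(self, state):
        # In practice: an LLM call or planner; here, a trivial rule.
        return "escalate" if "complex" in state["input"] else "answer"

    def act(self, decision):
        # In practice: call a tool, send a reply, update a record.
        return f"action taken: {decision}"

    def learn(self, decision, outcome):
        # Store feedback so future reasoning can use past experience.
        self.memory.append((decision, outcome))

    def step(self, observation):
        state = self.perceive(observation)
        decision = self.reason(state)
        outcome = self.act(decision)
        self.learn(decision, outcome)
        return outcome

agent = SimpleAgent()
print(agent.step("routine permit query"))  # action taken: answer
print(agent.step("complex appeal case"))   # action taken: escalate
```

The point of the sketch is the shape of the loop, not the toy logic: each cycle feeds outcomes back into memory, which is what lets an agent adapt over time without continuous human intervention.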


Sharing some use cases in the public sector, the document noted that the data science team from GovTech's AI Practice (forward deployed to SkillsFuture Singapore) set up a multi-agent system prototype to get deeper insights from customer relationship management (CRM) data. 
 
This was achieved by modelling AI agents as government officers, each working on their own specialised tasks. 


The setup consisted of two categories of agents - business agents and data agents - which performed specialised tasks and were sometimes assisted by tools like code executors and graph knowledge base query engines. 


As an example, depending on the nature of the query, the “business analyst” would choose either the “data analyst” or the “graph analyst” (or in some cases, both) to retrieve and analyse the data relevant to the query. 


Using this method, the team was able to glean contextualised, validated insights from a process that would typically require extensive human intervention. 
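The routing pattern above can be sketched in a few lines. The agent names ("business analyst", "data analyst", "graph analyst") come from the primer's example, but the keyword-based routing logic and the function signatures below are illustrative assumptions only; in the actual prototype each agent would be LLM-backed and assisted by tools such as code executors and graph query engines.

```python
# Illustrative sketch of the multi-agent routing pattern: a
# "business analyst" chooses which specialist agent(s) handle a query.
# The routing rules and return values are assumptions for illustration.

def data_analyst(query):
    # In practice: runs code against CRM tables via a code executor.
    return f"tabular analysis of: {query}"

def graph_analyst(query):
    # In practice: queries a graph knowledge base query engine.
    return f"relationship analysis of: {query}"

def business_analyst(query):
    """Route the query to one or both specialist agents."""
    results = []
    if "trend" in query or "count" in query:
        results.append(data_analyst(query))
    if "relationship" in query or "linked" in query:
        results.append(graph_analyst(query))
    # Fall back to the data analyst when no rule matches.
    return results or [data_analyst(query)]

print(business_analyst("count of enquiries per month"))
print(business_analyst("how are these cases linked?"))
```

The key design choice the primer's example illustrates is the split of responsibilities: the business agent decides *what* analysis is needed, while the data agents decide *how* to perform it with their specialised tools.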

Public service automation  


The document added that the Singapore government could leverage AI agents to enhance public services across a wider range of domains.  


One suggested area was automating and improving public services by handling routine inquiries from citizens.  


As an example, an agent could autonomously answer queries, or even execute instructions on permit applications, tax payments, or social service requests - freeing up human officers to focus on more complex cases. 


Another suggested use case was using agentic AI-enabled chatbots to perform transactions like filing taxes, applying for grants and performing CPF (pension fund) withdrawals through a chat interface.  


This could be done by enhancing existing chat assistants like VICA and AIBots with agentic capabilities to provide real-time support and transactions, the report noted.  


Within the government, agents could be used to enhance current career advancement platforms like Career Kaki to better match job seekers with suitable positions based on their skills and preferences.  


Agentic AI can also support policy analysis. AI agents could be built to identify relevant datasets, analyse trends and generate reports for policy officers, providing policymakers with better insights into the impact of government policies and helping them make more informed decisions. 


Agentic systems could also be designed to probe vulnerabilities in existing systems or perform safety testing on LLM-powered applications to enhance application safety, the report added.  


Multi-agent systems could assist citizens unfamiliar with legal processes by extracting information from legal documents and providing personalised guidance.  

Major challenges 


The report observed that despite obvious benefits, there were challenges in the use of agentic AI in public services. 


One of the major ones was determining the appropriate level of complexity for an agent. Over-engineering could lead to unnecessarily complex and inefficient systems, while under-engineering could limit the agent's ability to solve complex problems.  


Outlining a potential security concern, the report noted that agents are vulnerable to methods that subvert their intended purpose.  


Not only are they susceptible to prompt injection attacks, they can also execute code (using code interpreters or APIs), posing a significant security risk if code execution is not carefully controlled. 


Another concern highlighted was that agents might make mistakes when dealing with sensitive operations, and that insufficient control over an agent's actions and permissions could lead to unauthorised access to sensitive data or system resources. 


Security measures must be implemented to protect against these attacks and mistakes, the report stressed. 


GovTech’s AI Practice Group added that despite the potential dangers, AI agents “offer a transformative approach for leveraging the capabilities of LLMs in complex and dynamic government environments”.