How public sector agencies can lay the foundations for AI agents
By Workato
Clear objectives, strong security, and data readiness are among the essential steps required to tap into the transformative potential of agentic AI for governments and citizen-oriented services, says Workato’s Chief of AI Products & Solutions, Bhaskar Roy.

Identifying use cases and openness to change are some of the foundational steps that public sector agencies need to take for the widespread deployment of agentic AI. Image: Canva.
The rapid pace at which agentic artificial intelligence (AI) is growing can be overwhelming.
Different use cases, concerns about deployment, and the new competencies needed to manage agentic AI are among the worries slowing adoption of this new AI application in the public sector.
The question is, do the advantages of AI agents outweigh the concerns?
To effectively reap the benefits of AI agents, public agencies must identify pain points where agents can improve workflows, while preparing the workforce to collaborate with the technology rather than be replaced by it, shares Workato’s Chief of AI Products & Solutions, Bhaskar Roy.
AI agents at the citizens’ service
The ideal experience for a citizen navigating public services includes fast, personalised, and reliable responses to queries. But when every citizen is asking for the same thing, how can public officers keep up that level of efficiency?
This is where agentic AI can potentially lend a helping hand.
Roy shares that citizen support is one of the key areas where AI agents can empower the public sector to provide answers tailored to the individual in record time, without the wait on hold or the trip to a physical office. This adds to their accessibility appeal.
Giving an example from the US, Roy says: “On a government website, if you put some kind of agent in there with the action capability to identify the user, knowing their social security number, [the AI agent] can answer questions like ‘what was my tax return last year?’ Those kinds of things that are being handled through a support staff in the back, now AI agents are capable of handling.”
He adds that such a functional model requires granting the agent the appropriate permissions to fetch the necessary citizen information and formulate the answers it is prompted for. This way, the agent knows which user it is speaking to and what they need, and can act accordingly.
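To illustrate the kind of permission-gated lookup Roy describes, the sketch below shows one minimal, hypothetical pattern in Python: an agent tool only fetches a citizen record if the authenticated session carries the matching scope. The function names, scopes, and data store are illustrative assumptions, not a description of any specific government system or Workato product.

```python
# Minimal, hypothetical sketch of a permission-gated agent tool.
# Names, scopes, and the records store are illustrative assumptions,
# not a real government API.

from dataclasses import dataclass, field


@dataclass
class CitizenSession:
    """An authenticated citizen and the scopes granted to the agent."""
    citizen_id: str
    granted_scopes: set[str] = field(default_factory=set)


# Stand-in for the back-office records the agent would query on the user's behalf.
TAX_RECORDS = {"citizen-123": {"2023_return": "USD 1,850 refund"}}


def fetch_tax_return(session: CitizenSession, year: str) -> str:
    """Agent tool: only runs if the session explicitly grants 'tax:read'."""
    if "tax:read" not in session.granted_scopes:
        raise PermissionError("Agent lacks the 'tax:read' scope for this user.")
    record = TAX_RECORDS.get(session.citizen_id, {})
    return record.get(f"{year}_return", "No return on file for that year.")


if __name__ == "__main__":
    session = CitizenSession(citizen_id="citizen-123", granted_scopes={"tax:read"})
    print(fetch_tax_return(session, "2023"))  # -> USD 1,850 refund
```

In this sketch the permission check sits inside the tool itself, so the agent cannot reach the data store without an explicit grant tied to the authenticated user.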
Amplifying use cases
Another foundational step for public sector agencies looking to get the most out of agentic AI is to identify which processes are suited to deploying the new technology, given the existing technical expertise within departments.
Low-code/no-code platforms, for example, can democratise AI deployment, says Roy.
In practice, by successfully rolling out one use case on a low-code/no-code platform, organisations can see that results with AI agents are achievable and expand their use to other departments without requiring extensive technical expertise.
Sharing an example, Roy notes that a sales team rolled out a CPQ (configure, price, quote) agent that reduced the time to generate sales quotes from 40 to three minutes. Upon its success, the organisation “wants to roll out more agents, and they are starting to democratise the technology for various teams.”
Once public agencies identify the use case for AI agents according to their needs and capabilities, it is easier to expand to other potential uses or deployments, explains Roy.
He believes that democratisation in the AI space will gain strength in the coming year, allowing organisations to iterate on these solutions across different departments and relieve teams of mundane tasks that can be automated.
Security must be top of mind
Having security measures in place is another foundational requirement for public sector agencies.
Centralised governance is one way organisations can manage permissions, such as what privileges agents hold, and understand how they operate within the enterprise.
“For AI to work, you must provide it with all the necessary information. So you must put in place all the possible guardrails to make sure that it does not go rogue,” Roy notes.
He adds that data access must be managed through clear permission models that agents follow when accessing data on a user’s behalf, such as just-in-time measures or agent authentication, which limits agents to a specific set of clearly defined actions once authenticated.
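As a rough illustration of such a just-in-time model, the sketch below assumes short-lived, action-scoped grants issued only after the agent authenticates against an allow-list. The class names, actions, and expiry policy are hypothetical assumptions, not Workato’s implementation.

```python
# Hypothetical sketch of just-in-time, action-scoped grants for an agent.
# Class names, the action allow-list, and the expiry policy are assumptions.

import time
from dataclasses import dataclass

# Clearly defined actions the agent is allowed to request after authentication.
ALLOWED_ACTIONS = {"read_case_status", "update_contact_details"}


@dataclass
class JitGrant:
    """A short-lived grant covering exactly one action for one agent."""
    agent_id: str
    action: str
    expires_at: float

    def is_valid(self, action: str) -> bool:
        return self.action == action and time.time() < self.expires_at


def authenticate_agent(agent_id: str, action: str, ttl_seconds: int = 60) -> JitGrant:
    """Issue a grant just in time, for one allow-listed action only."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Action '{action}' is not on the agent's allow-list.")
    return JitGrant(agent_id=agent_id, action=action,
                    expires_at=time.time() + ttl_seconds)


def perform_action(grant: JitGrant, action: str) -> str:
    """Central governance check: the grant must match the action and be unexpired."""
    if not grant.is_valid(action):
        raise PermissionError("Grant expired or does not cover this action.")
    return f"Agent {grant.agent_id} performed '{action}'."


if __name__ == "__main__":
    grant = authenticate_agent("citizen-support-agent", "read_case_status")
    print(perform_action(grant, "read_case_status"))
```

The point of the pattern is the guardrail Roy mentions: even with access to the necessary information, the agent can only act within a narrow, time-boxed, centrally governed set of permissions.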
Additionally, a workforce prepared to manage AI within the organisation can enhance the security of processes for both internal transformation and AI-driven product transformation, Roy shares.
Openness to change
Understanding the capabilities and permissions that agents have can be challenging for employees, so it is important for organisations to foster a mindset shift towards trusting agents with the work they perform.
“It’s almost like a user behaviour change,” says Roy. “They have to be comfortable with AI agents doing the work for them, so that’s a big adoption hurdle.”
One of the products that Workato is working on aims to ease that hurdle by integrating records from different systems into a simple conversational user interface powered by agents in the background.
“We hope that it changes that notion of adoption because you’re not having to go to multiple systems and trying to figure out how AI works across all of [them]. It becomes more like a home page for your own personal use or use within your work,” Roy shares.
He adds that by integrating AI into a simpler application that “meets the users where they are”, the expectation is to foster a more positive attitude toward AI agents in the workplace.
With a solid groundwork in place, the benefits of agentic AI in the public sector can well outweigh the concerns.