How agentic AI transforms platform operations
By Dynatrace
Ahead of the Innovate Roadshow in Singapore, Dynatrace’s APJ VP & CTO Rafi Katanasho highlights that public sector IT is moving from reactive ‘screen watching’ to defining strategic guardrails and improving service quality.

At Dynatrace, agentic AI is transforming how its customers can move from traditional monitoring to autonomous operations. Image: Canva
For years, the mandate for public sector IT teams has been to keep the lights on.
But as government’s digital services grow increasingly complex across multi-cloud and hybrid environments, the scale of modern infrastructure is outpacing human capacity.
“Organisations are drowning in data but starving for action,” says Rafi Katanasho, Dynatrace’s Vice President and Chief Technology Officer for Asia-Pacific and Japan (APJ).
At Dynatrace, agentic artificial intelligence (AI) is transforming how its customers can move from traditional monitoring to autonomous operations.
What this means is that the platform handles the detection-to-resolution chain, and IT teams can now focus on what really matters: setting the policies, defining the guardrails, and improving citizen services.
Beyond traditional automation
To understand this shift, one must distinguish between traditional monitoring, AIOps and autonomous operations.
"Traditional monitoring tells you the building is on fire," Katanasho explains. "AIOps tells you which floor, which room, and what started it."
But agentic AI goes further: it calls the fire brigade, evacuates affected floors, reroutes traffic through alternative exits, and files the incident report, all before you have finished reading the alert.
While traditional automation is rigid and follows the "if this then that" playbook, he notes that agentic AI systems independently assess a situation, reason through options, coordinate with other agents, and take contextually appropriate action in real time.
Katanasho assures that the AI agents on the platform act within the policy boundaries defined by the agency, providing full traceability of every action taken.
"You are not handing over control. You are extending your team's capacity to act," he says.
Blueprint for autonomy
Katanasho highlights three core innovations that allow Dynatrace to build a platform that doesn’t just observe, but understands and acts.
The first is unified data foundations. Bringing metrics, logs, traces, events, security signals and business data into a single store with real-time mapping creates a “live digital twin,” he explains.
The second is fusing deterministic AI with agentic AI. While deterministic AI establishes the causal ground truth, agentic AI reasons and acts within that grounded context.
He emphasises the importance of maximising determinism before any generative AI (GenAI) or agentic AI enters the workflow.
The reason is mathematical: a single large language model operating at 95 per cent accuracy sounds acceptable.
But when ten sequential agent calls are chained together, the error rate compounds and effective accuracy can drop toward 60 per cent.
In complex government environments, that gap is the difference between operational resilience and cascading failure.
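The arithmetic behind this compounding effect can be checked directly. A minimal illustration, using the 95 per cent per-call accuracy and ten-call chain from the example above:

```python
# If each agent call in a chain is correct independently with probability p,
# the chance the whole chain is correct is p raised to the number of calls.
p = 0.95      # per-call accuracy from the example above
calls = 10    # ten sequential agent calls

effective_accuracy = p ** calls
print(f"{effective_accuracy:.1%}")  # prints "59.9%", i.e. roughly 60 per cent
```

This is why the accuracy of the whole workflow is far lower than the accuracy of any single step, and why grounding each step deterministically matters more as chains grow longer.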
“And when agentic AI does enter, it reasons about verified facts, actual causal chains, real dependency maps, confirmed impact assessments, not probabilistic guesses,” he adds.
The benchmark results support this approach: problems are solved up to twelve times more often, three times faster, and at half the cost when deterministic agents establish context before GenAI enters the workflow.
The third is governed orchestration. By using open standards like the Model Context Protocol (MCP), AI agents can coordinate across different cloud platforms and tools without vendor lock-in.
“The result is a platform that does not just observe, it understands, decides, and acts. And critically, it does so with the explainability and auditability that government agencies and regulated industries require,” he says.
Meaningful human control
As Singapore becomes the first nation to publish a comprehensive governance framework specifically designed for autonomous AI systems, the question of human oversight remains paramount.
The Model AI Governance Framework for Agentic AI, announced by Minister Josephine Teo at the World Economic Forum (WEF) in January 2026, establishes four governance dimensions.
These are bounding risks upfront, ensuring meaningful human accountability, implementing technical controls throughout the agent lifecycle, and enabling end-user responsibility through transparency.
"These dimensions align directly with how we have architected the platform," Katanasho says.
"When IMDA emphasises that humans must remain ultimately accountable for agent actions, they are articulating in policy terms what we have been building in architectural terms," he adds.
In his view, the line should be drawn based on blast radius and reversibility, not on the sophistication of the action.
For low-risk actions, like auto-scaling infrastructure to absorb a predictable demand spike (e.g. a tax filing deadline), agents should act autonomously within pre-defined policy boundaries.
For high-stakes changes like citizen identity services or healthcare platforms, the AI should justify its recommendations with full causal context and wait for human approval.
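A sketch of how such a line might be drawn in code. The risk tiers, action names, and threshold below are illustrative assumptions, not Dynatrace's actual policy model; the point is that the gate keys on blast radius and reversibility, not on how sophisticated the action is:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    blast_radius: int   # number of services or users potentially affected
    reversible: bool    # can the change be rolled back automatically?

def requires_human_approval(action: Action, radius_limit: int = 100) -> bool:
    """Gate on blast radius and reversibility, not on sophistication."""
    return (not action.reversible) or action.blast_radius > radius_limit

# Auto-scaling ahead of a tax-filing spike: bounded and reversible.
scale_up = Action("scale_web_tier", blast_radius=1, reversible=True)
# A schema change on a citizen identity service: high stakes.
schema_change = Action("migrate_identity_db", blast_radius=200_000, reversible=False)

print(requires_human_approval(scale_up))       # prints "False": act autonomously
print(requires_human_approval(schema_change))  # prints "True": wait for approval
```

In a real deployment the boundary would come from agency policy, but the shape of the decision stays the same: irreversible or wide-impact actions always route to a human checkpoint.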
“You are not eliminating the expert, you are liberating them,” Katanasho notes. He adds that autonomous operations also support non-specialists.
When AI can explain a technical bottleneck in plain language, a generalist officer can understand the operational health and make informed decisions without needing deep infrastructure expertise, he adds.
Real-world gains
To illustrate the impact of this shift, Katanasho points to Macquarie Bank, which manages over 6,000 services with 1,500 engineers and zero tolerance for outages.
By using AI agents for automated diagnostics and as verification gatekeepers, the bank reduced triage times to minutes, cut critical incidents by 59 per cent, and achieved 99.98 per cent availability for its services.
Katanasho uses this to illustrate how the conversation changes for leadership.
"When an outage occurs, the conversation with your audit team shifts from explaining a three-hour outage to reporting how a potential impact on 200,000 citizens was identified and prevented before they noticed," he observes.
Another visible gain is reducing the tool sprawl that plagues most IT environments.
He cites United Airlines, which consolidated fragmented monitoring tools and achieved end-to-end observability across 800 applications in nine months.
Consolidating these into a single platform allows agencies to retire redundant tools and free engineering capacity for innovation, he notes.
First steps for public agencies
1. Start with unified observability
Consolidate fragmented tools into a single platform to see the environment more holistically.
Highlighting this as an “essential prerequisite,” he notes that doing this delivers immediate value in reduced tool costs, faster incident resolution, and better cross-team collaboration even before any agentic capabilities are activated.
2. Establish a causal baseline
Map causal dependencies across your entire environment, so that the platform understands how every service, database, and application programming interface (API) relates to one another.
“This is the deterministic foundation that makes agentic AI trustworthy. Without it, any automation you build is guessing,” he says.
3. Identify high-toil and low-risk workflows
Your first candidates for supervised automation should be tasks like routine diagnostics where the risk is bounded.
In supervised automation, he notes, you “let agents propose actions, have humans approve them, and measure the results. Build confidence with evidence.”
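The propose-approve-measure loop described above can be sketched as follows. The function names and the toy approver, executor, and metric hooks are placeholders for whatever ticketing or chat-ops tooling an agency already uses, not any specific product API:

```python
def supervised_run(proposals, approve, execute, measure):
    """Agents propose; humans approve; results are measured as evidence."""
    results = []
    for proposal in proposals:
        if approve(proposal):                # human checkpoint
            outcome = execute(proposal)      # bounded, low-risk action
            results.append((proposal, measure(outcome)))
        else:
            results.append((proposal, "rejected"))
    return results

# Toy example: routine diagnostics where the risk is bounded.
log = supervised_run(
    ["restart stale worker", "clear temp cache"],
    approve=lambda p: "restart" in p,        # stand-in for a human reviewer
    execute=lambda p: f"executed: {p}",
    measure=lambda outcome: "ok",
)
print(log)  # [('restart stale worker', 'ok'), ('clear temp cache', 'rejected')]
```

The measured results are what builds the evidence base: over time, actions with consistently good outcomes become candidates for moving inside the autonomous policy boundary.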
4. Align with governance early
For Singapore agencies, this means mapping approaches to frameworks like IMDA’s model from the outset to accelerate the path to production and avoid friction later.
In practice, this is defining the AI agent’s permission boundaries, establishing meaningful human checkpoints, implementing baseline testing protocols, and ensuring that the team understands the agent’s capabilities and limitations.
5. Invest in people
Retrain the operations teams to move from reactive monitoring to setting policies and interpreting AI-driven insights.
“The cultural shift, from humans doing the operational work to humans directing the intelligence that does the work, is ultimately what determines whether your agentic investment delivers its full value,” he notes.
Dynatrace’s Innovate Roadshow in Singapore is happening on July 22, 2026, at Sands Expo & Convention Centre, and is free to attend. Join the innovators shaping the future of observability for a day of in-depth learning and networking.
