Why cybersecurity is the foundation for AI trust in organisations

Oleh ST Engineering

At this year’s GovWare 2025, amid discussions on quantum threats and AI-powered defences, one message rose above the technical noise: Trust.

Goh Eng Choon, President of ST Engineering’s Cyber business, was the keynote speaker at GovWare 2025. Image: ST Engineering

Goh Eng Choon, President of ST Engineering’s Cyber business, said that as organisations rush to harness AI, cybersecurity must evolve from a layer of protection into the foundation of public confidence.

 

Goh was invited to speak at the GovWare 2025 keynote session, as well as on a high-level panel during Singapore International Cyber Week (October 21–23), where he shared insights on the future of digital defence in an AI era.

 

Supported by the Cyber Security Agency of Singapore, GovWare is recognised as Asia's premier cybersecurity event.

 

“If cyberspace is the new frontier,” he said, “then artificial intelligence (AI) may well be our starships - sleek, fast, and powerful.” But in this vast digital frontier, he added, “the captain’s chair must always belong to the cyber defenders.”

 

Delivering his keynote, Humans at the Core: Securing the Future of AI in a Cyber World”, Goh cautioned that while AI promises shared progress and inspires hope, it has also created an “unyielding fault line” - a growing scarcity of trust.

 

In an era defined by AI-powered misinformation, data breaches, and machine-speed cyberattacks, he made a simple but powerful case: “Cybersecurity is more than protection; it is the framework through which trust can be established and maintained.”

 

Cybersecurity, in his view, is the bridge between humans and AI, the mechanism “that allows humans to interact with AI systems with confidence, knowing that the data we feed, the decisions we make, and the insights we receive are secure, auditable, and accountable,” he explained.

 

Later, in the high-level panel Survival of the Smartest: Agentic AI in Cybersecurity Operations, Goh explored how organisations can steer the rise of agentic AI.

 

His takeaway was succinct: pace, prepare, and govern.

 

By introducing AI gradually, training people extensively, and enforcing accountability, organisations can turn cybersecurity from a back-end safeguard into the bedrock of trust.

 

To subscribe to the GovInsider bulletin, click here

1. Paced AI implementation to match human readiness

 

Building trust in AI, Goh explained, is about aligning human understanding with technological capability; it’s not about accelerating deployment.

 

Organisations should adopt AI with a deliberate “crawl, walk, run” approach, ensuring that human readiness and understanding grow in tandem with advancing AI capabilities.

 

Goh compared the integration of autonomous, agentic AI to that of learning how to drive.

 

“You don’t tell an AI agent to drive the way humans do because humans can be reckless and make mistakes on the road,” he noted. Instead, organisations must teach AI to follow the safest and most rigorous models, ensuring consistency, safety, and control.

 

This deliberately paced implementation prevents fear, avoids overwhelming human workers, and creates a controlled environment where trust can be built on safe and predictable interactions with the AI, rather than abrupt automation.

2. Mandate tailored training across the AI ecosystem

 

Technology alone cannot sustain trust; trust is built by people. Goh emphasised that every role in the AI lifecycle - from developers to system owners to end-users - must be trained to understand not only AI’s potential but also its risks and limitations.

 

For developers, the training should focus on understanding adversarial AI threats, from poisoned data and hijacked automated pipelines to manipulated prompts that leak sensitive information. It should emphasise learning how to test, validate, and secure models before they are shipped.

 

For system owners, the training should focus on the potential risks and implement mechanisms to mitigate them when they detect their AI output has been tampered with.

 

For end users, they serve as the “additional line of defence” and the “check and balance against the implementation of agentic AI”. They can help identify issues with AI systems that could potentially result from spoofing or manipulation.

 

“When teams are properly trained, they move from being passive users to active security components,” Goh said.

 

To support this vision, ST Engineering has developed AGIL® SecureAI, designed to strengthen the security and safety robustness of AI and generative AI systems through continuous assessment, monitoring, and risk mitigation.

 

His message was clear: trust in AI grows when the human ecosystem matures alongside the technology itself. This allowed both the human and the AI system to be used effectively and in conditions that complement each other, he highlighted.

3. Keep human-in-the-loop for accountability and ethics

 

As agentic AI use matures in organisations, maintaining human control would become a non-negotiable for building public trust and ensuring accountability.

 

Increasingly, AI systems can perform sophisticated tasks at scale. However, human oversight would be crucial in ensuring accuracy, preventing errors from escalating, and providing ethical judgment in high-risk scenarios.

 

Goh shared how ST Engineering embeds the human-in-the-loop principle in its Agentic AI Security Operations Centres (SOCs).

 

“AI-generated alerts, recommendations, or automated actions are reviewed and validated by experienced analysts, who can also query the AI to explain its reasoning in plain language,” he explained.

 

He highlighted that such principles do not merely serve as a safeguard but also create "a continuous feedback loop where human insight fine-tunes the AI’s models.”

 

“The result is an autonomous SOC that acts at machine speed when seconds matter, yet keeps humans firmly in control,” he said, emphasising that this combines both the scale and precision of AI alongside the judgement and accountability of humans.

Trust at the helm

 

Goh’s keynote and his panel discussion echoed a consistent theme: responsible AI in an organisation must be guided, not left to run free.

 

The future of AI-driven governance depends on cybersecurity not just as a defence mechanism, but as an architecture of trust: one that links people, systems, and institutions together through security, transparency, and accountability.

 

In Goh’s words, “We are not just cyber defenders; we are stewards of trust itself.”

 

Much like the Sci-Fi metaphor he used, “AI may be the starship - sleek, fast, and powerful - but in organisations, the captain’s chair must always remain human.”