Beyond explainability: Why AI trust depends on governance, not perfect visibility
By Thoughtworks
Trust in high-stakes AI will not come from perfect model explainability alone, but from the ability to govern the systems around AI, including how they are tested, monitored, challenged and held accountable.

At Milipol TechX in Singapore, the panel session “AI Transparency & Explainability: Can We Trust the Black Box?” explored a question that is becoming central to the next phase of AI adoption.
As models become more powerful, more embedded and less transparent, what will it take for governments and organisations to trust them in the moments that matter?
The session featured leaders from the Stanford Institute for Human-Centered Artificial Intelligence, Resaro.ai, the National University of Singapore, Thoughtworks and Temus, who shared their perspectives.
Across the discussion, one theme stood out: trust in high-stakes artificial intelligence (AI) will not come from perfect model explainability alone. It will come from the ability to govern the systems around AI, including how they are tested, monitored, challenged and held accountable.
This marks an important shift in how leaders need to think about AI adoption. The question is no longer only whether a model can explain itself.
It is whether the institution deploying it can understand its boundaries, manage its risks and stand behind its outcomes.
The limits of explainability
As AI models become more capable, they are also becoming harder to interpret.
For governments and regulated sectors, this creates a practical challenge: should AI systems be held back until every decision can be fully explained, or should the focus shift to making their use more transparent, bounded and accountable?
The panel cautioned against treating model explainability as the only route to trust.
In many cases, the more useful question is not whether every internal calculation can be made visible, but whether people can understand how the system is intended to work, what influences its outputs, where its limits are and how decisions can be challenged or remediated.
This is where transparency becomes more practical than perfect explanation.
Citizens may not need to understand every technical feature of a model, but they do need to know when AI is being used, what it is being used for, where its boundaries are and what recourse exists when outcomes are wrong or harmful.
For those deploying AI, this reframes explainability as part of a broader trust architecture. It is no longer enough to ask whether the model can be interpreted.
Leaders need to ask whether the overall system is visible, testable and accountable enough to be used in consequential settings.
Governance has to be engineered
The discussion also pointed to a deeper challenge: many organisations are still treating AI governance as a policy exercise, when it increasingly needs to become an engineering discipline.
AI principles matter, but principles alone do not make systems safe. For governance to work in practice, it has to be embedded into the way AI systems are designed, built, tested and operated.
That means combining technical controls such as test harnesses, audit trails, policy as code and system constraints with human oversight that allows users, operators and regulators to understand how the system is behaving and where risks may emerge.
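As a minimal sketch of what "policy as code" can mean in practice, the Python example below expresses governance rules as executable checks that gate deployment. The policy names, thresholds and report fields are illustrative assumptions, not a specific Thoughtworks or vendor framework; the point is that each principle becomes something a pipeline can actually run and fail on.

```python
# Minimal policy-as-code sketch: governance rules expressed as executable checks.
# Policy names, thresholds and report fields are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvaluationReport:
    accuracy: float              # measured on a held-out, in-context test set
    false_positive_rate: float   # at the decision threshold used in production
    has_audit_trail: bool        # every decision traceable to inputs and model version
    human_review_enabled: bool   # high-impact outputs routed to a reviewer

PolicyCheck = Callable[[EvaluationReport], bool]

# Each governance principle becomes a named, testable rule.
POLICIES: dict[str, PolicyCheck] = {
    "min_accuracy_0.90": lambda r: r.accuracy >= 0.90,
    "max_false_positive_rate_0.05": lambda r: r.false_positive_rate <= 0.05,
    "audit_trail_required": lambda r: r.has_audit_trail,
    "human_oversight_required": lambda r: r.human_review_enabled,
}

def deployment_gate(report: EvaluationReport) -> list[str]:
    """Return the list of policies the system currently violates."""
    return [name for name, check in POLICIES.items() if not check(report)]

if __name__ == "__main__":
    report = EvaluationReport(accuracy=0.93, false_positive_rate=0.08,
                              has_audit_trail=True, human_review_enabled=True)
    violations = deployment_gate(report)
    print("Blocked from deployment:" if violations else "All governance checks passed.",
          violations or "")
```

Checks like these sit alongside, not in place of, human oversight: they make it cheap to ask, on every release, whether the system still satisfies the rules the institution has set for it.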
This becomes even more important as AI systems become more agentic. When systems can invoke tools, access data, act across workflows and influence decisions, auditability must extend beyond the final output.
Teams need visibility into permissions granted, tools used, data accessed and checkpoints crossed.
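One way to picture that visibility is a structured, append-only audit trail that records each of those events as the agent works, rather than only the final answer. The sketch below is a simplified illustration in Python; the event names, fields and in-memory store are assumptions standing in for whatever logging infrastructure a real deployment would use.

```python
# Minimal audit-trail sketch for an agentic system: permissions, tool calls,
# data access and checkpoints are recorded as structured, timestamped events.
# Field names and the in-memory store are illustrative assumptions.
import json
from datetime import datetime, timezone
from typing import Any

class AgentAuditLog:
    def __init__(self, run_id: str):
        self.run_id = run_id
        self.events: list[dict[str, Any]] = []

    def record(self, event_type: str, **details: Any) -> None:
        # Append-only record so behaviour can be reconstructed after the fact.
        self.events.append({
            "run_id": self.run_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,
            **details,
        })

    def export(self) -> str:
        return json.dumps(self.events, indent=2)

# Usage: log each step as the agent works through a task.
log = AgentAuditLog(run_id="case-2024-0117")
log.record("permission_granted", scope="read:case_files", granted_by="operator_42")
log.record("tool_invoked", tool="document_search", query="incident report 2024-01")
log.record("data_accessed", source="case_db", records_returned=3)
log.record("checkpoint", name="human_review", outcome="approved")
print(log.export())
```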
The shift now is from ethics as intent to assurance as evidence. Many institutions already have AI principles. Far fewer have the mechanisms to prove whether those principles are being followed in live systems.
The next frontier of responsible AI will be the ability to show, continuously and credibly, that systems operate within the boundaries they were designed for.
Accountability is the trust mechanism
In high-risk AI, accountability cannot be outsourced to the model. “The AI did it” is not an acceptable answer when decisions affect citizens, patients, customers or frontline operations.
This is where governance becomes more than a compliance requirement. It becomes the foundation for institutional confidence. A clear chain of responsibility is needed around AI-enabled outcomes so that when something goes wrong, responsibility sits with parties who can act, explain and be held to account.
For agencies deploying AI in mission-critical systems, the practical test is whether they can defend the decision when the stakes are real. Not whether the AI system is impressive in a pilot. Not whether it can generate a plausible explanation after the fact. But whether the institution deploying it can stand behind the outcome.
That also means testing cannot be a one-time exercise before deployment. Models evolve, data shifts, usage patterns change and new risks emerge. AI systems need to be tested in the conditions where they will operate, with validation refreshed as the system and context evolve.
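A minimal sketch of what that ongoing validation can look like, assuming a fixed evaluation suite re-run on a schedule: the metric, baseline and alerting hook below are placeholders for whatever the deploying institution actually uses, but the shape is the same, measure the system in conditions close to operation and raise an alarm when it drifts outside agreed bounds.

```python
# Continuous-validation sketch: re-run an evaluation suite on a schedule and
# alert when performance drops below an agreed baseline. The metric, threshold
# and alerting hook are placeholder assumptions.
import statistics
from typing import Callable, Iterable

def evaluate(model: Callable[[str], str],
             cases: Iterable[tuple[str, str]]) -> float:
    """Fraction of held-out cases the model answers correctly."""
    results = [1.0 if model(prompt) == expected else 0.0
               for prompt, expected in cases]
    return statistics.mean(results)

def validation_run(model: Callable[[str], str],
                   cases: list[tuple[str, str]],
                   baseline: float = 0.90) -> None:
    score = evaluate(model, cases)
    if score < baseline:
        # In a real system this would page an owner and open an incident,
        # not just print a message.
        print(f"ALERT: accuracy {score:.2%} fell below baseline {baseline:.0%}")
    else:
        print(f"OK: accuracy {score:.2%} within agreed bounds")

# Example with a trivially simple stand-in model and test set.
test_cases = [("2+2", "4"), ("capital of France", "Paris"), ("3*3", "9")]
validation_run(lambda p: {"2+2": "4", "capital of France": "Paris"}.get(p, "?"),
               test_cases)
```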
From trusted models to trusted systems
The black box problem is not going away. If anything, AI models are likely to become more capable, more embedded and less intuitively explainable.
For governments and regulated sectors, this should not be seen only as a constraint. It is also an opportunity to raise the standard for how AI is deployed. Those that move fastest and most responsibly will not be waiting for perfect interpretability.
They will be building systems that can be tested in context, monitored in operation, audited after the fact and challenged when outcomes go wrong.
That is where the next phase of AI trust will be built: not in the model alone, but in the governance engineered around it.
-------------------------------------------------------------
About Thoughtworks:
At Thoughtworks, we help governments and regulated organisations build the platforms, practices and governance foundations needed to move from AI ambition to trusted, accountable adoption.