Sovereignty in the age of AI: How Asia-Pacific countries can play a different game
By Hilda Barasa and PeiChin Tay
AI sovereignty for governments should be understood as having the agency to influence outcomes, manage dependences and build resilience over time, rather than owning every layer of the AI stack.
Tony Blair Institute for Global Change (TBI)'s PeiChin Tay and Hilda Barasa share how different APAC governments are operationalising AI sovereignty. Image: TBI
As governments around the world accelerate efforts to harness artificial intelligence (AI), questions on AI sovereignty have moved from the margins of technology policy to the centre of economic and geopolitical strategy.
In much of Europe and North America, sovereignty is still treated primarily as a question of ownership and control: who owns the infrastructure, the models and the data on which AI systems depend.
This framing reflects earlier eras of industrial policy, and it misses a more profound challenge that AI now poses for governments.
AI is not a bounded national asset that can be secured through territorial control or domestic ownership alone. It is a globally distributed and deeply interdependent system that is increasingly concentrated within a small number of firms and platforms.
In such a global technology ecosystem, the central question is no longer one of ownership alone but how states retain agency.
Asia-Pacific’s approach: Sovereignty through interdependence
Asia-Pacific enters this debate from a fundamentally different starting point.
The region’s economies are deeply embedded in global trade, technology and supply chains. For much of the region, sovereignty has never been equated with self-sufficiency.
It has been understood instead as the ability to operate strategically within interdependence, leveraging openness while managing exposure. In the age of AI, that distinction matters.
AI systems today are built across borders: compute may be sourced in one country, models trained in another, and applications deployed globally.
No single state controls the full stack.
For most Asia-Pacific economies, attempting to replicate this entire ecosystem domestically would not only be unrealistic, but counter-productive – risking the misallocation of scarce political, fiscal and institutional capital towards symbolic forms of control rather than the development of real capability.
For this reason, sovereignty is best understood as agency rather than autonomy.
What matters is the ability to make deliberate choices about how AI is accessed, governed and deployed.
Owning every layer of the AI stack matters less than being able to influence outcomes, manage dependencies and build resilience over time. States that retain these capacities will be better positioned to adapt as technologies shift and external conditions change.
Critically, access to frontier AI capabilities can strengthen, rather than erode, sovereignty. Exclusion from advanced models and compute is itself a strategic vulnerability.
States that cannot deploy frontier systems at scale risk falling behind in productivity growth, public service delivery and state capacity, regardless of how domestically “sovereign” their infrastructure may appear on paper.
In practice, autonomy without capability offers little protection.
How different economies operationalise AI sovereignty
Across the region, governments are already converging on a more pragmatic, layered approach to sovereignty.
Rather than pursuing uniform autonomy, they combine direct control, steering power and managed interdependence across different layers of the AI stack.
Fallback capacity is prioritised where continuity and trust matter most, while global systems are relied on where scale, speed and innovation are decisive. How this balance is struck varies from country to country.
- Highly-developed economies can exercise sovereignty primarily through governance strength and institutional resilience. Japan illustrates this through its investments in domestic compute capacity, open-weight Japanese-language models and credible AI governance frameworks which are designed to ensure continuity, trust and strategic autonomy in critical areas – not technological dominance. Crucially, these efforts sit alongside deep integration with allied AI ecosystems, reflecting a deliberate choice to remain embedded in global innovation networks rather than to insulate national systems from them.
- Large economies, by contrast, can convert scale into strategic leverage. India offers a clear example. Through digital public infrastructure, data governance frameworks and coordinated public demand, India is shaping how AI is deployed across sectors – without attempting to replicate frontier AI systems end-to-end. Its sovereignty is exercised through steering and aggregation: influencing market behaviour, standards and adoption pathways, rather than isolation.
- Smaller Asia-Pacific economies face tighter constraints, but also distinct opportunities. For them, sovereignty is less a question of ownership than of optionality. By prioritising interoperability, diversified partnerships and regulatory credibility, these countries can preserve the ability to switch providers, hedge geopolitical risk and adapt systems as the AI landscape shifts. In a fast-moving technological environment, the capacity to pivot may prove more valuable than nominal control.
This perspective also highlights a common misconception in global policy debates where localisation is conflated with sovereignty.
“Sovereignty-as-a-service” arrangements – local data centres, nominally domestic cloud infrastructure or compliance-driven wrappers – can entrench dependency if states lack real exit options, bargaining power or the institutional capacity to govern systems strategically.
Without the ability to shape contracts, standards and long-term incentives, localisation risks creating the appearance of control without its substance.
Redefining sovereignty for the AI era
The lesson emerging from Asia-Pacific is a redefinition of what sovereignty means in practice. In the age of AI, sovereignty is not secured by insulating national systems from global technology, but by positioning strategically within it.
It requires governments to make explicit choices about where control is essential, where dependence is acceptable and where influence can be exercised through markets, norms and institutions.
Taken together, the region points toward a more durable model of AI sovereignty – one grounded in access, adoption and adaptability, rather than technological self-sufficiency.
By embracing managed interdependence and investing in sustained institutional capability, Asia-Pacific countries demonstrate that agency in the AI era is not achieved by playing the old game harder, but by redefining the game itself.
----------------------------------------------------------
PeiChin Tay and Hilda Barasa are Senior Policy Advisors (Government Innovation) at the Tony Blair Institute for Global Change (TBI). TBI is a not-for-profit organisation that provides expert advice on strategy, policy and delivery, unlocking the power of technology across all three.
