What the Anthropic–Pentagon standoff means for the rest of us
By Mohamed Shareef
Across the Global North, ethics frameworks govern civilian AI while national security gets a pass, signaling a pragmatic reality: if AI defines power, governments intend to control it.

A quiet but consequential confrontation is unfolding between a frontier AI company and the world’s most powerful military.
Reports indicate that US Defense Secretary Pete Hegseth has pressed Anthropic CEO Dario Amodei to relax the ethical constraints embedded in its Claude model for military applications, raising the possibility of contract consequences or the use of national security authorities.
Anthropic, for now, appears to be holding its position: no fully autonomous lethal weapons, and no mass domestic surveillance of US citizens. Those red lines still stand.
Governments and frontier AI companies have long been on a collision course over this question.
Security establishments seek capability with minimal constraint. AI firms, in contrast, have made public commitments about limits and safeguards. The tension was inevitable.
What is unusual here is that it is playing out in the open, and that the private sector is visibly pushing back.
Having spent years inside government, I know how rarely corporate principles survive when a powerful client decides they are inconvenient.
Anthropic’s decision to stand firm is significant. The more important question is what this signals for governments beyond Washington.
Who sets the rules – and who is exempted?
The EU AI Act remains the most comprehensive civilian AI governance framework in operation. It mandates transparency, human oversight and bias testing – the core elements of responsible AI governance.
Yet military and national security applications are excluded almost entirely.
Europe has constructed a detailed values-driven regime for civilian AI while exempting some of the most consequential use cases.
The strategic logic is understandable. European leaders have argued that the region cannot afford to constrain itself while competitors accelerate. That tension is real.
But it also means that Europe’s vision of “trustworthy AI” contains a structural gap at its core.
Across Asia, the emphasis has been different. Taiwan, South Korea and Singapore have updated or enacted AI frameworks that position AI as strategic infrastructure to be cultivated, not primarily as a risk to be contained.
The focus is on capability-building, competitiveness and national resilience.
As in Europe, however, military applications sit largely outside civilian guardrails. The underlying message is pragmatic: if AI is shaping power, governments intend to shape AI.
The pattern is becoming clear. Across the Global North, ethics frameworks govern civilian AI, while national security applications are afforded wide latitude.
The Anthropic episode may be unusual in its visibility, but the underlying friction between state prerogative and corporate constraint is structural. It will recur.
Where sovereignty actually sits
Building national digital infrastructure teaches a lesson strategy documents rarely capture: sovereignty is layered.
Some decisions are genuinely yours. Others were made upstream, embedded in platforms, procurement frameworks or technical standards long before you arrived.
That experience shapes how this episode reads from the vantage of smaller and emerging economies. For countries like the Maldives, and for much of the Global South, the central issue is not whether one company maintains its red lines today.
It is what happens when AI systems underpinning public services, financial infrastructure and communications are governed by frameworks we did not write.
Those frameworks can evolve. The conditions attached to them can shift. And the further a country sits from the centre of decision-making, the less influence it has over those shifts.
At the India AI Impact Summit, Prime Minister Narendra Modi argued that the aspirations of the Global South must sit at the centre of AI governance, and that safety frameworks should function as “glass boxes” rather than opaque systems.
The framing is instructive.
Governance is not only about what rules say, but whether those affected by them can see them, question them and meaningfully shape them.
At present, much of the Global South remains a consumer of AI infrastructure built elsewhere and governed by rules set elsewhere.
If those systems become further integrated into military or national security architectures, the downstream implications will also be shaped elsewhere.
That is a strategic dependency governments need to examine with urgency.
Dialogue is not the same as obligation
UN Secretary-General António Guterres has been unequivocal: humanity’s fate cannot be left to an algorithm, and humans must retain authority over life-and-death decisions. He has called for a binding international instrument on lethal autonomous weapons.
Advisory bodies have been established. Dialogues are ongoing.
There remains, however, a significant gap between dialogue and binding obligation. Most current efforts sit on the dialogue side of that divide.
Experience in small island states suggests caution: frameworks designed in one context and applied universally often embed the assumptions of their origin. Those assumptions may not surface until systems are operational and difficult to reverse.
A binding treaty negotiated primarily among major powers risks repeating that pattern. It may produce rules, but not legitimacy.
And governance frameworks lacking broad legitimacy tend to fracture under geopolitical strain.
The window is not indefinite. In the absence of binding agreement, operational norms are being set through procurement decisions, military doctrines and contracts with private AI firms.
Over time, those norms harden into default practice. Retrofitting international law to constrain established practice is far more difficult than shaping it early.
A strategic imperative for the Global South
At its core, this is a decision about authority. Governments are determining whether humans retain meaningful control over lethal force, or whether that authority is delegated to systems operating at machine speed.
This is not merely a technical debate. It is a governance decision of historic consequence.
AI governance cannot remain a downstream technical matter delegated solely to communications or innovation ministries. It is a sovereignty issue. It intersects with foreign policy, defence strategy, trade negotiations and public procurement.
Governments that treat it narrowly risk ceding leverage in domains far beyond technology.
More concretely, the moment may be right for a Digital Non-Aligned coalition: a structured grouping of states, potentially beginning with ASEAN members, small island developing states and African Union partners, committed to three objectives.
First, developing shared AI governance standards.
Second, pooling access to compute and technical expertise.
Third, presenting coordinated positions in multilateral negotiations on autonomous weapons and cross-border data governance.
This would not be anti-West or anti-China. It would be pro-rule-setting capacity – ensuring that emerging economies are not merely rule-takers in a domain that will shape their long-term security and prosperity.
Alongside such a coalition, a Sovereign AI Capacity Fund, modelled in part on climate finance mechanisms, could help smaller states build or access AI infrastructure without inheriting governance conditions embedded in external partnerships.
Dependency is rarely the product of preference alone. More often, it reflects constrained alternatives.
The red lines appear to be holding today. Whether they endure will depend on decisions made in rooms few countries are invited into.
That is precisely why more governments need to act before the architecture is fixed – collectively and deliberately.
-----------------------------------------------------
Mohamed Shareef is a former Minister of State for Environment, Climate Change and Technology in the Maldives (2021-2023). He previously served as Permanent Secretary of the Ministry of Science and Technology (2019-2021) and as Chief Information Officer at the National Centre for Information Technology (2009-2014), and led the development of the country's national digital public infrastructure. He has also worked in academia, including as a researcher at the United Nations University. He currently serves as Senior Advisor for Digital Transformation at Nexia Maldives.