What Indonesia and Malaysia's Grok ban teaches small states about AI platform governance
By Mohamed Shareef
Indonesia and Malaysia have become the first countries to block access to Grok, the AI chatbot integrated into Elon Musk's social media platform X.
The temporary restrictions, imposed over a single weekend, came after regulators found the platform was being used to generate non-consensual deepfake images of women and children.
The speed of action offers important lessons for policymakers across Asia Pacific, particularly those in smaller states grappling with how to govern AI platforms developed and operated far beyond their borders.
What the regulators did
Indonesia's Communications and Digital Affairs Ministry imposed a temporary block on Saturday, followed by Malaysia's Communications and Multimedia Commission on Sunday.
Indonesia's Communications Minister Meutya Hafid framed the decision in human rights terms, stating that non-consensual sexual deepfakes represent "a serious violation of human rights, dignity and the safety of citizens in the digital space."
Malaysia's regulator cited "repeated misuse" of Grok to generate obscene and sexually explicit images, including content involving minors.
Notices issued to X Corp and xAI demanding stronger safeguards had drawn responses that relied mainly on user reporting mechanisms, which regulators deemed insufficient.
Both countries imposed temporary restrictions while legal and regulatory processes continue, with access to remain blocked until effective safeguards are demonstrated.
Notably, both Indonesia and Malaysia are Muslim-majority countries with existing anti-pornography laws, which provided additional legal grounding for swift action.
The global response has been slower
The EU, UK, France, India, and Australia have all expressed concern or opened inquiries, but none have restricted access.
The European Commission ordered X to preserve all Grok-related documents until the end of 2026 and called the generated images "unlawful" and "appalling."
The UK's Ofcom made urgent contact with X and xAI. India's IT Ministry issued a 72-hour ultimatum after finding X's initial response unsatisfactory. France referred cases to prosecutors.
Yet Indonesia and Malaysia remain the only countries to take direct platform-level action.
Why this matters for Small Island Developing States
For Small Island Developing States (SIDS) across the Indian Ocean, Pacific, and Caribbean, this situation exposes a structural challenge in digital governance.
The Maldives, for example, has a population of 500,000 spread across 1,200 islands. The entire nation is smaller than the workforce of some technology companies.
Indonesia has 275 million people; Malaysia has 34 million. Both have sufficient market presence to command platform attention. What options do smaller states have when platforms fail to implement adequate safeguards?
The answer lies not in accepting limited leverage, but in rethinking how small states approach AI platform governance.
Five policy considerations for small state regulators
1. Regional coordination mechanisms
Indonesia and Malaysia acted within days of each other. SIDS should explore similar rapid-response coordination through existing bodies like the Alliance of Small Island States (AOSIS) or regional groupings.
A coalition of 39 SIDS speaking with one voice carries regulatory weight that individual nations cannot generate alone.
The Digital Forum of Small States (DFOSS) has already begun convening ministerial discussions on digital governance. These forums could be strengthened to enable coordinated responses to platform failures.
2. Legal framework readiness
Both Indonesia and Malaysia had existing legal instruments that enabled swift action: Indonesia's Electronic Information and Transactions Law and Malaysia's Communications and Multimedia Act 1998.
Small states should assess whether current legal architectures can respond to AI-generated harms with similar speed.
Key questions include: Can regulators issue binding directives to platforms? What enforcement mechanisms exist for non-compliant foreign entities? Are there clear legal definitions covering AI-generated synthetic media?
3. Digital Public Infrastructure as a governance layer
Countries building national digital infrastructure have an opportunity to embed AI safeguards at the platform level.
The Maldives' national digital public infrastructure, spanning digital identity, payments, and citizen services platforms, is the kind of foundational system where identity verification requirements for high-risk AI applications could be implemented.
This approach addresses harms at the infrastructure layer rather than relying solely on platform cooperation.
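To make the idea concrete, here is a minimal sketch of what such an infrastructure-layer gate could look like, assuming a national eID service that can release verified-identity and age attributes. The function names (verify_eid_token, authorise_request), the capability list, and the policy itself are illustrative assumptions for this article, not a description of any deployed Maldivian system or existing API.

```python
# A minimal sketch: an authorisation check at the DPI layer that requires a
# verified national eID before a high-risk generative-AI capability can be
# used. Everything here is a hypothetical illustration, not an existing API.

from dataclasses import dataclass
from typing import Optional

# Capabilities most prone to deepfake abuse (illustrative list).
HIGH_RISK_CAPABILITIES = {"image_generation", "face_swap", "voice_cloning"}

@dataclass
class EidAssertion:
    subject_id: str    # pseudonymous identifier released by the eID provider
    verified: bool     # whether identity proofing has been completed
    age_over_18: bool  # age attribute released by the eID provider

def verify_eid_token(token: str) -> Optional[EidAssertion]:
    """Placeholder for validating a signed token with the national identity
    provider. A real deployment would check the signature, expiry, and
    audience; this stub simply denies by default."""
    return None

def authorise_request(capability: str, eid_token: Optional[str]) -> bool:
    """Allow low-risk capabilities freely; require a verified adult identity
    for high-risk ones, so anonymous misuse is blocked at the infrastructure
    layer rather than left to platform moderation."""
    if capability not in HIGH_RISK_CAPABILITIES:
        return True
    if eid_token is None:
        return False  # anonymous access to high-risk tools is refused outright
    assertion = verify_eid_token(eid_token)
    return assertion is not None and assertion.verified and assertion.age_over_18

if __name__ == "__main__":
    print(authorise_request("text_chat", None))         # True: low-risk, no ID needed
    print(authorise_request("image_generation", None))  # False: high-risk, anonymous
```

The deny-by-default design is the point: if the identity check cannot be completed, the high-risk capability is simply unavailable, which is what addressing harms at the infrastructure layer means in practice.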
4. Participation in international standard-setting
The EU's Digital Services Act and the UK's Online Safety Act are creating regulatory precedents that will shape global AI governance.
Small states should actively participate in forums like the ITU's AI for Good initiative and the upcoming WSIS+20 discussions to ensure their perspectives inform emerging frameworks.
UNESCO's recent assessment found that 50 per cent of SIDS have no official AI initiatives. Building institutional capacity to engage in these discussions is foundational work.
5. Public awareness as regulatory infrastructure
Effective AI governance requires citizens who understand both the capabilities and risks of generative AI.
Digital literacy programmes that specifically address synthetic media, deepfakes, and AI-generated content create an informed public that can support regulatory action and identify harms early.
The sovereignty question
Indonesia's Director General of Digital Space Supervision, Alexander Sabar, noted that initial findings showed Grok "lacks effective safeguards to stop users from creating and distributing pornographic content based on real photos of Indonesian residents."
This framing matters. It positions the issue not as content moderation, but as protection of citizens from a foreign platform's technical failures.
For small states, this sovereignty lens may prove more effective than attempting to match the regulatory capacity of larger jurisdictions.
The gap between technological capability and regulatory capacity widens every day that action is delayed. Indonesia and Malaysia have demonstrated that mid-sized states can act decisively when platforms fail their citizens.
The question for smaller states is how to build the coalitions, legal frameworks, and institutional capacity to do the same.
The Grok incident will not be the last. How small states prepare now will determine whether they respond from strength or scramble to catch up.
-----------------------------------------------------
Mohamed Shareef is a former Minister of State for Environment, Climate Change and Technology in the Maldives (2021-2023). He previously served as Permanent Secretary of the Science and Technology Ministry (2019-2021) and as Chief Information Officer at the National Centre for Information Technology (2009-2014), and he led the development of the country's national digital public infrastructure. He has also served in academia, including as a researcher at the United Nations University. He currently serves as Senior Advisor for Digital Transformation at Nexia Maldives.