Digital consent is breaking down – governments need a new model
By Mohamed Shareef
As governments expand their use of AI, the design choices being layered onto existing digital foundations will determine whether citizens retain any real control over their data.
While consent mechanisms exist, they rarely produce meaningful consent in practice – and this matters as governments now deploy AI on top of this foundation. Image: Canva
Ask any government digital leader in Asia Pacific whether citizens meaningfully consent to how their data is used, and most will say yes.
They point to privacy policies, cookie banners, and terms of service checkboxes.
These mechanisms do exist, but they rarely produce meaningful consent in practice.
This matters because governments across the region are now deploying artificial intelligence (AI) on top of this “consent” foundation.
The design choices being made today will determine whether citizens retain any real control over their data, or whether the fiction of consent becomes permanently embedded on a national scale.
What consent is supposed to do
The notice-and-consent model sits at the centre of virtually every data protection framework in the Asia Pacific region.
While Europe’s General Data Protection Regulation (GDPR) has pushed global standards forward, many Asia Pacific frameworks still rely heavily on the notice-and-consent structure.
Singapore’s Personal Data Protection Act (PDPA), India’s Digital Personal Data Protection (DPDP) Act, Indonesia’s Personal Data Protection (PDP) law, and comparable frameworks across the region all rest on the same premise: tell people what you are doing with their data, ask for their consent, and let them say no.
At first glance, the premise appears reasonable. In practice, it has not delivered meaningful consent.
The reason is simple.
Reading all the privacy policies an average person encounters in a year would take 76 working days.
A Deloitte survey found that 91 per cent of people accept terms and conditions without reading them.
Research on third-party data flows has found that fewer than 15 per cent are disclosed in privacy policies.
The notice-and-consent model assumes an informed, legally literate user with real bargaining power. That person rarely exists in practice.
Why this is a government problem
Most commentary frames broken consent as something companies do to people. It is not only that.
Governments across Asia Pacific have spent the last decade building digital service infrastructure that embeds this same architecture into citizen interactions.
Public sector portals use third-party analytics. National service apps integrate commercial SDKs.
Citizen communication happens over platforms such as WhatsApp, Telegram, and Facebook, which operate under entirely separate privacy regimes.
Government procurement has often prioritised capability over consent-aware design, because capability has been the evaluation criterion.
When a citizen applies for a permit, registers a business, or accesses health records through a government portal, they face the same binary: consent or no service.
With a commercial platform, a user can theoretically walk away.
With a government service, they often cannot. You cannot opt out of paying taxes. You cannot refuse to register your vehicle.
The coercive dynamic that is already problematic in the private sector becomes structurally inescapable when the service provider is the state.
Several governments in the region are also building national digital identity systems that will mediate consent across the entire public and private digital ecosystem for decades.
The design choices being made now will determine whether those systems give citizens genuine control or formalise the current consent model at scale.
Why consent is failing in practice
Beyond the structural problems, the operational reality is worse than most officials realise.
Acceptance or exclusion
GDPR requires consent to be “freely given.” But what real choice does a user have?
Give consent and get the service, or decline and get nothing. No negotiation, no partial opt-in.
When the only alternative is exclusion from the platform or the government portal, consent is not freely given.
Dark patterns are pervasive
A 2020 study of 10,000 EU websites found that 56 per cent used pre-ticked boxes for non-essential cookies and 72 per cent hid the rejection option behind multiple menus.
Sweden’s DPA fined companies in April 2025 for designing a bright “Accept” button alongside a near-invisible text link to reject.
California fined Honda US$632,500 (S$807,960) the same year for making opt-out harder than opt-in. These are not accidental design choices. They are systematically optimised.
Third-party data flows are invisible
Fewer than 15 per cent of third-party data flows are disclosed in privacy policies. Users consent to the site they visit. They have no meaningful engagement with the dozens of ad-tech vendors and tracking systems operating behind it.
AI has made consent retroactively fragile
Meta began training its AI models on 20 years of EU user data in May 2025, relying not on explicit consent but on “legitimate interest.”
Posts from 2012, photos from 2015, comments from 2010: all fed into model weights that cannot be unwound.
Privacy group NOYB estimated potential class action liability at over 200 billion euros (S$298.50 billion). The commercial incentive is clear: asking for consent introduces the risk of refusal.
System design compounds the problem
Withdrawing consent is rarely as easy as giving it, despite legal requirements. Silence is often treated as implied agreement. Terms change unilaterally and continued use counts as acceptance.
Compliance frameworks are being followed to the letter while their intent is systematically bypassed.
The point of no return
Governments writing AI governance frameworks right now need to treat consent at the point of AI training as a distinct and more consequential question than consent at the point of data collection.
This distinction has not yet been clearly addressed in most legal frameworks in the region.
As a result, governments risk extending legacy consent assumptions into systems that fundamentally change how data is used.
Across Asia Pacific, governments have been deploying AI for welfare screening, healthcare triage, fraud detection, and public service allocation. Many of these systems draw on citizen data collected under frameworks built for a different era.
The question of whether citizens ever meaningfully agreed to have their data used for algorithmic decision-making remains largely unanswered.
Current techniques for removing specific personal data from model weights remain highly limited and impractical at scale.
Regulators continue to treat data deletion as tractable, even though emerging evidence suggests it is far more complex in AI systems.
Once data is in a model, the right to be forgotten becomes difficult to operationalise.
What governments in Asia Pacific should do
Calls for “more transparency” and “better public engagement” have appeared in policy documents for years without shifting the underlying dynamic.
Here is what would actually change outcomes.
1. Procurement standards should explicitly prohibit dark patterns through enforceable UX design requirements
If a vendor’s consent interface makes acceptance easier than rejection, or uses pre-ticked boxes for non-essential processing, it should not pass government procurement.
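Such requirements can be made testable. The sketch below is a minimal illustration of an automated procurement check for one dark pattern, pre-ticked boxes for non-essential purposes; the form fields and the whitelist of essential purposes are hypothetical assumptions, not drawn from any existing standard.

```python
# Illustrative procurement check: flag pre-ticked checkboxes for
# non-essential processing in a vendor's consent form. The purpose
# names and the ESSENTIAL whitelist are assumptions for this sketch.
from html.parser import HTMLParser

ESSENTIAL = {"strictly_necessary"}  # hypothetical whitelist of purposes

class PretickedBoxCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "checkbox":
            purpose = a.get("name", "unknown")
            # A bare "checked" attribute means consent is pre-selected.
            if "checked" in a and purpose not in ESSENTIAL:
                self.violations.append(purpose)

consent_form = """
<form>
  <input type="checkbox" name="strictly_necessary" checked>
  <input type="checkbox" name="ad_personalisation" checked>
  <input type="checkbox" name="analytics">
</form>
"""

check = PretickedBoxCheck()
check.feed(consent_form)
if check.violations:
    print("FAIL: pre-ticked non-essential purposes:", check.violations)
```

A fuller check would also compare the interaction cost of accepting versus rejecting, the other pattern regulators have fined.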
National digital identity frameworks must treat consent as a technical capability, not a legal checkbox. Citizens should be able to see what data has been shared, with whom, and on what basis, and revoke it in ways that are technically enforceable.
Singapore’s Singpass model has elements of this. It should be the minimum standard across the region, not an exception.
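A minimal sketch of what consent as a technical capability could look like: every grant is a record the citizen can inspect and revoke, and services check the ledger at each use rather than once at enrolment. The names and fields here are illustrative assumptions, not Singpass's actual design.

```python
# Illustrative consent ledger: grants are inspectable, revocable records,
# and enforcement happens at use time. All field names are assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    citizen_id: str
    data_category: str       # e.g. "health_records"
    recipient: str           # who the data is shared with
    purpose: str             # the basis the citizen agreed to
    granted_at: datetime
    revoked_at: datetime | None = None

class ConsentLedger:
    def __init__(self):
        self._records: list[ConsentRecord] = []

    def grant(self, citizen_id, data_category, recipient, purpose):
        self._records.append(ConsentRecord(
            citizen_id, data_category, recipient, purpose,
            granted_at=datetime.now(timezone.utc)))

    def revoke(self, citizen_id, recipient, purpose):
        # Revocation is one call and takes effect immediately.
        for r in self._records:
            if (r.citizen_id, r.recipient, r.purpose) == (citizen_id, recipient, purpose):
                r.revoked_at = datetime.now(timezone.utc)

    def is_permitted(self, citizen_id, data_category, recipient, purpose):
        # Services call this on every use, so a revoked grant is
        # technically enforceable, not dependent on downstream goodwill.
        return any(r.citizen_id == citizen_id
                   and r.data_category == data_category
                   and r.recipient == recipient
                   and r.purpose == purpose
                   and r.revoked_at is None
                   for r in self._records)

    def audit_view(self, citizen_id):
        # What the citizen sees: what was shared, with whom, on what basis.
        return [(r.data_category, r.recipient, r.purpose,
                 "revoked" if r.revoked_at else "active")
                for r in self._records if r.citizen_id == citizen_id]
```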
2. Governments should audit their own portals for third-party data flows
Government websites routinely carry commercial tracking infrastructure that citizens have limited visibility into and limited ability to refuse.
In many jurisdictions, the findings may be surprising.
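A first pass at such an audit does not require specialist tooling. The sketch below lists every third-party host a portal page loads resources from; the portal URL is a placeholder, and a real audit would also capture dynamic requests, for example through a headless browser.

```python
# Illustrative portal self-audit: list third-party hosts referenced by a
# page's scripts, images, and stylesheets. PORTAL is a placeholder URL.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

PORTAL = "https://portal.example.gov"  # hypothetical government portal
FIRST_PARTY = urlparse(PORTAL).hostname

class ThirdPartyScan(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        src = a.get("src") or (a.get("href") if tag == "link" else None)
        if src:
            host = urlparse(src).hostname
            if host and host != FIRST_PARTY:
                self.hosts.add(host)

scan = ThirdPartyScan()
scan.feed(urlopen(PORTAL).read().decode("utf-8", errors="replace"))
for host in sorted(scan.hosts):
    print("third-party resource host:", host)
```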
3. Any AI system making decisions about citizens needs a separate consent architecture
The fact that someone consented to their records being stored does not mean they consented to those records being used to train a model that will later assess their welfare eligibility.
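In code terms, the gate belongs at the point of training, not the point of collection, as in this minimal sketch; the purpose strings and consent records are illustrative assumptions.

```python
# Illustrative training-time consent gate: storage consent does not imply
# training consent. Purpose names and records are assumptions.
consents = {
    # (citizen_id, purpose) -> granted?
    ("c-1001", "service_delivery"): True,
    ("c-1001", "model_training"): False,  # stored, never agreed to training
    ("c-1002", "service_delivery"): True,
    ("c-1002", "model_training"): True,
}

def eligible_for_training(records):
    # Checked immediately before data enters the corpus, because data
    # absorbed into model weights is effectively irreversible.
    return [r for r in records
            if consents.get((r["citizen_id"], "model_training"), False)]

records = [{"citizen_id": "c-1001", "data": "..."},
           {"citizen_id": "c-1002", "data": "..."}]
print(eligible_for_training(records))  # only c-1002 passes the gate
```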
4. Governments should invest in use restrictions alongside consent frameworks
Consent captures what was agreed to at one moment. Use restrictions determine what is permissible regardless of what was agreed.
For sensitive government data, use restrictions are more protective and more enforceable.
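The difference from consent is easy to express as code. In the minimal sketch below, a restricted use is denied even when a valid consent record exists; the categories and rules are illustrative assumptions.

```python
# Illustrative use restriction layered over consent: some uses of sensitive
# government data cannot be consented away. The rules here are assumptions.
RESTRICTED_USES = {
    ("health_records", "commercial_ad_targeting"),
    ("health_records", "model_training"),
    ("tax_records", "model_training"),
}

def use_permitted(data_category, use, has_consent):
    # The restriction is checked first, regardless of what was agreed.
    if (data_category, use) in RESTRICTED_USES:
        return False
    return has_consent

print(use_permitted("health_records", "model_training", has_consent=True))  # False
print(use_permitted("health_records", "clinical_care", has_consent=True))   # True
```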
The current model is increasingly misaligned with how digital systems actually operate. Citizens often click agree because they have limited practical choice. They do not read the policy because reading it would not change anything.
Governments in the region have the regulatory authority, the procurement leverage, and in many cases the technical infrastructure to change this.
The opportunity is to apply those tools to their own digital operations, not just to the private sector.
The consent economy was built with governments watching. Rebuilding it will require governments leading.
-----------------------------------------------------
Mohamed Shareef is a former Minister of State for Environment, Climate Change and Technology in the Maldives (2021-2023). He previously served as Permanent Secretary of the Science and Technology Ministry (2019-2021) and as Chief Information Officer at the National Centre for Information Technology (2009-2014), where he led the development of the country's national digital public infrastructure. He has also served in academia, including as a researcher at the United Nations University. He currently serves as Senior Advisor for Digital Transformation at Nexia Maldives.
