Is humour the key to better AI governance? Audrey Tang thinks so.
By Apolitical
An interview with Audrey Tang, Cyber-Ambassador-at-Large.
What if we could build a civil service from the ground up with AI at its core? Audrey Tang and Apolitical CEO Robyn Scott explore what that could look like. Image: Audrey Tang on Apolitical
Audrey Tang was named on Apolitical’s Government AI 100 2025. The interview was conducted by Robyn Scott (Apolitical Co-Founder and CEO), with help from some excellent questions from Apolitical’s AI in Government community. It was edited by Christina Obolenskaya.
What if we could build a civil service from the ground up with AI at its core?
This conversation between Audrey Tang and Apolitical CEO Robyn Scott explores what that could look like — how AI can assist rather than replace, how citizen participation can scale through AI-enabled deliberation and how trust anchors like academic institutions can stabilise governance in a polarised world.
Along the way, they touch on humour as a tool for governance, the role of open-source safety measures and why The Hitchhiker’s Guide to the Galaxy offers lessons for AI policy.
As governments adopt AI, the goal isn’t to replace civil servants — it’s to make their work more effective, responsive, and human. Get the full conversation below.
RS: I prepared for this interview by asking members of a few Apolitical Communities what — if they had the chance to interview you — they would ask. First of all, let me just say how excited they were! One asked to please pose the following question: What was the role of The Hitchhiker's Guide to the Galaxy by Douglas Adams in shaping your thinking and worldview?
AT: Oh, that’s a great question! I encountered the ‘Babel Fish’ through AltaVista, which used the name for its early internet translation service. And I was like, ‘Why would an internet search engine company call its translator after a fish?’
I looked it up, started reading about the fish, and immediately fell in love with the idea of technology being used in a way that invokes humour, sarcasm, and so on.
Basically, not in a dominant way. The Babel Fish lives inside the ear and, when inserted, makes it possible to cross the language divide between any species.
This is what technologies can do if implemented correctly. For me, the Babel Fish helps recontextualise technology so that it fits back into humanity.
To me, the most important thing is ‘Don’t panic’. This is such a core component of The Hitchhiker’s Guide to the Galaxy.
We should approach each and every emerging technology humanistically. A 'humour over rumour' approach, so to speak.
We can face emerging technologies with open curiosity, which is much better than facing them with a sense of doom. I often return to this frame of mind.
RS: Humour is not a word that traditionally comes up in the context of government. Public servants' work is quite serious, especially given its impact on citizens. Have you seen humour deployed successfully, perhaps even in the policymaking context?
AT: Definitely – the entire counter-infodemic playbook in Taiwan is called ‘humour over rumour’.

Very early on, in January and February 2020, we knew there would be polarised attitudes toward mask use. One side said N95 masks were the only effective ones, while the other argued that masks harm you.
You could feel that each side was ready to start a fire over this issue.
Within 24 hours, we posted a meme featuring a very cute Shiba Inu putting her paw to her mouth, saying, 'Wear a mask to remind each other to keep your dirty, unwashed hands off your face.' That’s humour over rumour.
Humour is the common ground. It’s the one thing both sides can agree on.
Once people looked at the meme, shared it and laughed at it, not only did it shift attention away from the polarising debate, but we measured an increase in tap water usage — people started washing their hands more.
This simple meme of a cute dog connected handwashing with public health and defused the situation.
As a result, we never had an anti-mask culture war in Taiwan. Similarly, we never had an anti-vaccine culture war either because we used very similar measures.
RS: Building on what you said about polarisation, we are living in an increasingly balkanised world. It’s super interesting to hear how you used humour to counter some of this. When it comes to emerging technologies, what kinds of governance models work in this new world to counter this balkanisation?
AT: Traditionally, government has been about broadcasting — broadcasting technologies from the last century enabled a few officials to speak and millions would listen.
But in a fragmented, polarised world, that top-down approach doesn't work anymore. The vertical, institutional model of trust is breaking down.
We need to flip the approach and do broad listening. Instead of broadcasting, broad listening means listening at scale: each citizen can participate through online citizen assemblies and juries, and their perspectives feed into sense-making tools.
This gives people a kind of group selfie: a picture of what they disagree on, where the fault lines are and what bridges can be built, like the Shiba Inu meme that bridged divides.
AI-assisted facilitation helps uncover this uncommon common ground. Each person can then receive a tailor-made response saying a) what policies actually changed because of their input, b) which groups take a different side from theirs, and c) what bridging ideas connect those groups.
The same principle has been applied to Community Notes, which has now been adopted by Meta as the new gold standard. We are working on what's called pro-social media, which takes the idea of Community Notes and applies it to the main feed.
Instead of viral misinformation being fact-checked after the fact, the main post itself will highlight the fault lines and the connective tissues of uncommon ground.
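A minimal illustration of the ‘bridging’ idea Tang describes: statements rise to the top only when they win approval across opposing opinion clusters. The sketch below, with made-up votes, scores each statement by its lowest per-cluster approval rate; this is a toy rule, not the actual algorithm behind Polis or Community Notes, which use clustering and matrix factorisation respectively.

```python
# Toy "bridging" ranker: score each statement by its *minimum* approval
# across opinion clusters, so only ideas every camp supports rank highly.
import numpy as np

def bridging_scores(votes: np.ndarray, clusters: np.ndarray) -> np.ndarray:
    """votes: (n_voters, n_statements) matrix of 1 (agree) / 0 (disagree or pass).
    clusters: (n_voters,) opinion-cluster label per voter.
    Returns one score per statement: its worst per-cluster approval rate."""
    scores = []
    for s in range(votes.shape[1]):
        per_cluster = [votes[clusters == c, s].mean() for c in np.unique(clusters)]
        scores.append(min(per_cluster))  # a statement must appeal to *every* group
    return np.array(scores)

# Two polarised groups; only statement 2 is backed by both.
votes = np.array([[1, 0, 1], [1, 0, 1],   # cluster 0 voters
                  [0, 1, 1], [0, 1, 1]])  # cluster 1 voters
clusters = np.array([0, 0, 1, 1])
print(bridging_scores(votes, clusters))   # -> [0. 0. 1.]
```

Ranking by the minimum rather than the average is what makes this ‘bridging’: a statement adored by one camp and rejected by the other scores zero, which is exactly the divisive content a conventional engagement-ranked feed would amplify.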
RS: What changes do existing institutions need to make to their ways of working to do “broad listening” at scale? Is it possible?
AT: There are two essential ingredients. First, broadband must be recognised as a human right — without it, digital engagement inevitably excludes people.
Second, leaders — mayors, ministers, MPs — must commit in advance to taking the process seriously, even without knowing the outcome. If a minister publicly commits to this approach, acknowledging ‘I will see this through’, it gives the process real weight.
In Taiwan, we take this even further: unless a proposal violates the law or the laws of physics, we commit to turning it into policy. Through an e-petition, 5,000 citizens can trigger a vote.
While it can be vetoed with a strong justification, if it aligns with the views of a representative statistical jury, the government is bound to act on it.
RS: Given the current decline in trust, there’s a unique window of opportunity for institutions to adopt these reforms — partly out of necessity. As trust erodes, they face real risks. Could you share more about the key anchors of trust in this process? What helps ensure that these commitments hold weight?
AT: Ten years ago, Taiwan was deeply polarised. In 2014, public trust in government was at just 9% — in a country of around 24 million people, that meant more than 20 million were sceptical of whatever the president said.
In this era of low trust, we found that certain institutions still held credibility. For example, the National Academy was widely trusted because it sat above any specific ministry. Within it, experts in information sciences were seen as nonpartisan, so we aligned with them.
At the same time, the Taiwanese equivalent of Reddit, PTT, had been hosted by National Taiwan University (NTU) for over a decade. It operated as an open-source platform, regulated by self-moderation like subreddits, with no shareholders or advertisers.
Because of this, all political parties recognised NTU as a neutral ground for legitimacy.
Whether it’s academic institutions, consumer rights organisations, or even local libraries, every society has trust anchors that can serve as credible neutral partners in public decision-making.
RS: Relatedly, are you seeing any promising examples of governments prioritising safety over dominance, especially around emerging technologies?
AT: Yes — the biggest risk is allowing a small group to unilaterally decide how AI is built and deployed. Only a handful of nations have the power to compete for dominance, while the other 200+ countries are effectively in a race to safety, as they have little control over AI development.
When people around the world deliberate on AI governance, the strongest point of consensus is the desire to avoid a single country or government having full control over global AI.
Surveys show 60% agreement on this, with even stronger sentiment in South America, where 75% of respondents oppose a small number of nations monopolising AI governance. Instead, the vast majority favour a race towards trust and safety.
RS: Could you share more about safety? It’s often misunderstood as a barrier to innovation, but how can governments balance experimentation with security? Are there any projects or approaches that stand out as successfully combining both?
AT: In Taiwan, when we crowdsourced the number one safety issue people wanted to tackle last year, the overwhelming response was fraud.
It’s now very easy to deepfake celebrities who chat with you online. Two years ago, you could tell because of glitches, like six fingers. But last year, those flaws were fixed, making deepfakes indistinguishable.
We used deliberation and alignment assemblies to draft a law: if Facebook posts an ad featuring a celebrity endorsing something, it must secure a digital signature from that celebrity.
If it fails to do so and someone gets scammed, Facebook is liable for the lost money. The law took effect on January 1st, and platforms like YouTube updated their policies to require strong KYC (Know Your Customer) for advertisements.
Safety isn’t about banning synthetic media—it’s about coupling it with provenance. My audiobook on Plurality.net is narrated by a synthetic voice, but I signed off on it. You can check GitHub and see my digital signature verifying that I produced it.
Safety, in this sense, is itself a form of technology. It is not the Luddite position of calling for a global stop or pause on synthetic media.
Instead, let's couple synthetic media with KYC, digital signatures, personhood credentials, widespread decentralised identifiers, wallets for everyone and decentralised authentication.
Then our safety technologies can make emerging technology into what we call trusted tech.
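The provenance pattern Tang describes, signing content so that anyone can verify who produced it, can be sketched in a few lines. The example below is illustrative only, using Ed25519 signatures via Python's cryptography package; it is not the specific tooling behind the Plurality audiobook signature.

```python
# Minimal content-provenance sketch: the creator signs a hash of the media
# file; anyone holding the published public key can verify the signature.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

digest = hashlib.sha256(b"<audiobook bytes>").digest()  # hash of the media file

private_key = Ed25519PrivateKey.generate()  # held privately by the creator
signature = private_key.sign(digest)        # published alongside the file
public_key = private_key.public_key()       # published, e.g. in a GitHub repo

try:
    public_key.verify(signature, digest)    # raises if file or signature was altered
    print("Provenance verified.")
except InvalidSignature:
    print("Tampered or unsigned.")
```

The same handshake underlies the advertising law described above: a platform can check a celebrity's signature on an endorsement before running the ad, and the absence of a valid signature becomes a machine-checkable red flag.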
RS: One question we often hear from governments is: How can AI governance avoid being treated as a monolith? Do you have any frameworks for distinguishing what should be open to experimentation and what needs to be safeguarded?
AT: First of all, we ask the people. In the global dialogues, for example, you can see that when AI touches personal care and companionship, such as care bots standing in for kindergarten teachers, many strongly oppose its use in these environments due to concerns about over-reliance and addiction.
On the other hand, if an AI bot is helping the caretakers care, suddenly that’s accepted. There’s no one-size-fits-all solution; every culture has its own norms.
Emerging AI risks often come from misuse outside of government control, yet governments must tackle them anyway. If governments relied on pre-AI technologies, the sheer volume and scale of these threats would overwhelm officials.
You have to fight AI with AI—this isn’t just an experimental concept; it’s a necessity.
Some key areas for AI deployment include resilience-building and assistance to caregivers—places where civil society has already expressed a need for help. These should be prioritised, while everything else can remain in controlled sandboxes.
RS: Governments are bringing swords to a gunfight when it comes to AI regulation. You’ve described AI as “assistive” intelligence—like eyeglasses—which I love as a metaphor. What does that mean in practice for civil servants?
AT: The key idea is the contrast with eyeglasses: what they do for you, and what a manipulative device would do instead.
Imagine a device that, like vision, takes in all available information — but instead of simply enhancing what you see, it uploads everything to the cloud, decides what to show or hide, inserts advertisements, and then feeds it back to your retina.
That wouldn’t be assistive intelligence; it would be a tool for manipulation — something straight out of Black Mirror.
Eyeglasses align technology with human needs, while manipulative AI aligns human behaviour with technology’s demands. It’s a fundamental difference in direction. Assistive intelligence should clarify our vision, not control it.
This means AI should enhance civil servants' effectiveness rather than replace them. Routine tasks, like summarising public commentary, can be handled by AI, achieving what previously only a large budget could. But the core of public service — human-to-human work — remains irreplaceable.
RS: Building on that, have you seen AI successfully augment human-to-human interactions?
AT: AI-enhanced video conferencing improves real-time translation and summarisation.
For example, I use an AI assistant during conversations — if I miss something, I can simply ask, ‘What did I just miss?’ and it fills me in. This ensures accuracy while allowing me to stay present and focus on non-verbal cues.
AI acts as a capable assistant, keeping discussions on track and preventing topic drift, but human interaction remains central. In fact, by handling routine tasks, it frees up my attention to pick up on micro-expressions and other subtle signals.
For example, I use MacWhisper, running locally on my MacBook with 96GB of RAM. It doesn’t transmit data to the cloud, ensuring privacy. LM Studio powers AI endpoints locally and integrates with other tools.
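As a concrete illustration of this local-first setup: LM Studio serves whatever model is loaded through an OpenAI-compatible API on the local machine, by default at http://localhost:1234/v1, so a script can query it without anything leaving the laptop. In the sketch below the model name, API key and prompt are placeholders; LM Studio ignores the key's value, but the client library requires one.

```python
# Query a locally served model through LM Studio's OpenAI-compatible
# endpoint; no data leaves the machine.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

reply = client.chat.completions.create(
    model="local-model",  # LM Studio routes this to whichever model is loaded
    messages=[{"role": "user",
               "content": "Summarise the last five minutes of this meeting transcript: ..."}],
)
print(reply.choices[0].message.content)
```

The same pattern extends to transcription: a locally run Whisper model produces the transcript, and the local endpoint summarises it, which is what keeps the assistant private by construction.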
RS: Let me shift gears and ask a completely different question: What would it take to build an AI-driven civil service in the future? What key components would be essential?
AT: We need a systematic approach to determining which tasks still require human oversight and which can be fully automated — not through generative AI, which still hallucinates, but through reasoning-based AI that creates tailored programs for specific functions, such as translation.
A large general-purpose model can understand requirements from both sides and synthesise a smaller, more specialised system — perhaps not even an AI model, but a piece of software designed for the task.
By connecting these systems, we can eliminate routine work while ensuring proper inspection for safety. Once a process is deemed safe, human oversight is no longer necessary.
For human-to-machine interactions, we can align AI with human needs by designing more intuitive user interfaces — previously constrained by GovTech development capacity, but no longer.
Service design principles and user journey mapping can help create interfaces that feel natural and ergonomic, enhancing rather than obstructing human work.
Ultimately, public service should be reoriented around human-to-human interactions, with AI assisting rather than replacing these connections. AI can transcribe, summarise, and support — but it should never substitute real human engagement.
RS: Would citizen participation be central to this AI-first model?
AT: Traditional systems limit citizen participation due to logistical constraints. AI-enabled deliberation allows large-scale engagement without such limits. AI-facilitated town halls could ensure ongoing consultation.
Transparency is key — communities should shape AI training data and decision-making rules. With legal frameworks ensuring accountability, AI can enhance the quantity and quality of public deliberation.
RS: One final question. We started this interview with a question from the Apolitical community. What would you like to share back with those 250,000 public servants and policymakers?
AT: When we see the internet of things, let's make it an internet of beings. When we see virtual reality, let's make it a shared reality.
When we see machine learning, let's make it collaborative learning. When we see user experience, let's make it about human experience. Whenever we hear the singularity is near, let's always remember the plurality is here.
* On December 9, 1997, Digital Equipment Corporation (DEC) and SYSTRAN S.A. launched the AltaVista Translation Service at babelfish.altavista.com, which was developed by a team of researchers at DEC.
It is also the name of the small, yellow, leech-like fish in The Hitchhiker’s Guide to the Galaxy which, when placed in your ear, lets you instantly understand anything said to you in any form of language.
The article was originally published on Apolitical.