Proactive AI governance the mantra for HK’s Privacy Commissioner
By Si Ying Thian
Privacy Commissioner Ada Chung shares that data protection authorities must pivot from being policing bodies to innovation enablers, guiding technological change by cultivating a lawful, secure, and supportive environment.

Trained as a barrister and public accountant, Ada Chung highlights that law and public service must evolve with technology. Image: Office of the Privacy Commissioner for Personal Data (PCPD)
For the Privacy Commissioner of Hong Kong, China, Ada Chung, data protection means proactive governance.
She believes that data protection authorities like HK’s Office of the Privacy Commissioner for Personal Data (PCPD) can help de-risk technological change and actively guide progress by creating and maintaining a clear, lawful and secure environment conducive to development.
Forward-looking and actionable governance were the two traits that came across most clearly as Chung spoke to GovInsider.
Chung’s forward-looking approach is evident in Hong Kong’s strategic approach to artificial intelligence (AI) governance, where the city took a pioneering step by making personal data protection and privacy the central focus of its frameworks.
When the PCPD issued its first guidelines in August 2021, this specific integration of privacy principles throughout the AI lifecycle stood out, contrasting with the broader AI frameworks that other international jurisdictions released earlier.
Continuing this actionable trend, the PCPD addressed a major and immediate challenge, staff use of technology, with its checklist for the use of Generative AI (GenAI) by employees, issued in March 2025.
The checklist is intended to be a practical and actionable resource tailored to help organisations draft internal policies for staff use of GenAI.
The role of public service in a tech-driven world
Trained as both a barrister and a public accountant, Chung highlights that law and public service must evolve with technology.
“Public interest should serve as the guiding compass,” she notes, citing the PCPD’s approach to AI governance as an example of a public agency setting the foundation and infrastructure to enable progress.
At the start of the AI boom, the PCPD's first AI governance guidelines in 2021 established fundamental standards to secure the public's interest before the widespread adoption that followed in the years to come.
In June 2024, the PCPD released the AI Model Framework to provide practical measures for organisations to establish robust AI governance strategies.
Recognising the evolving, practical needs of the public, the PCPD has rolled out, along with the checklist for employees, a guide with tips for users of AI chatbots to empower individuals to use these technologies safely and responsibly.
To drive high adoption, the PCPD’s guidance is designed to be actionable, by structuring the frameworks around practical business processes of AI users, using plain language and offering readily adoptable, specific recommendations.
These recommendations range from how organisations can label or watermark AI-generated content to tips on how to filter out AI-generated content that may pose privacy and ethical concerns.
“Development and safety of AI, which includes the protection of privacy, are essentially two sides of the same coin,” Chung notes.
On how privacy protection complements technological advances, she explains that “citizens would be more confident about, and would support, technological changes, when they are confident that their privacy rights would remain intact”.
From establishing foundational AI standards in 2021 to rolling out practical checklists for employees, PCPD's policymaking around AI consistently results in actionable, sequential, and targeted resources to protect and inform the public.
Continuous learning
Instead of simply enforcing rules, Chung explains, it is important for the PCPD to understand the technology, test its regulations and collaborate with industry to ensure that data protection is built into the innovation process from the start.
Continuous learning can help the agency to understand the risks, as PCPD's officers regularly attend workshops to maintain technical literacy and update themselves on the latest developments and risks.
First-hand experience of using the technology itself can help the agency to ensure that the recommendations are actionable and relevant for stakeholders, bridging the gap between policy and implementation.
And finally, actively engaging stakeholders allows the agency to understand their needs and offer customised support like training sessions.
Chung highlights how the practicality of the AI Model Framework was strengthened through extensive stakeholder and industry consultations, in which the PCPD incorporated valuable feedback from AI developers, users and service providers.
“I believe this collaborative approach significantly strengthened the robustness and usability of the final product, making it more practical for adoption by organisations,” she explains.
Blending local, national and international
Chung shares that the PCPD has considered both global best practices and the Chinese Mainland’s approach of putting “an equal emphasis on development and security of technologies.”
To achieve this balanced approach, the PCPD takes guidance from common themes across international jurisdictions, like adopting a risk-based approach and establishing a robust AI governance structure.
To ensure its guidance is both relevant and effective for local implementation, the PCPD conducts on-the-ground local research, including two survey studies and two rounds of compliance checks, to understand how Hong Kong companies are using AI.
This is complemented by public awareness campaigns around the risks brought by AI and related enforcement efforts, she adds.
Chung represents the PCPD, and actively contributes, on the international front as the co-chair of the Ethics and Data Protection in Artificial Intelligence Working Group and the International Enforcement Cooperation Working Group at the Global Privacy Assembly (GPA).
She shares how her agency co-sponsored two key AI-related resolutions that were unanimously adopted by the GPA, namely the “Resolution on Meaningful Human Oversight of Decisions Involving AI Systems” and “Resolution on the Collection, Use and Disclosure of Personal Data to Pre-Train, Train and Fine-Tune AI Models.”
She shares that the next phase of AI regulation in Hong Kong will look at targeted regulation or guidance on specific aspects of AI governance for specific sectors, such as healthcare and education.
She will also be “keeping a close eye on the developments in agentic AI”, which may warrant “an entirely new regulatory approach,” she says.
Highlighting that legislative amendments may be introduced if and when appropriate, she says that "the Chief Executive of the Hong Kong SAR has tasked the Department of Justice with establishing an inter-departmental working group to review the legal framework needed to support a wider application of AI".