What will ethical AI governance in the age of ChatGPT look like?

By Yogesh Hirdaramani

Even as AI development accelerates to the point of convincingly simulating human conversation, the discourse on the responsible use of AI has largely been driven by geopolitical heavyweights. GovInsider speaks to Mark Findlay, professorial fellow at the SMU Centre for AI and Data Governance, on the pathways towards ethical AI governance emerging from Asia in the age of generative AI.

Will Buddhist countries like Thailand develop AI regulations founded on Buddhist principles like compassion and selflessness? Image: Canva

Generative AI tools like ChatGPT can help civil servants produce first drafts, write healthcare appeal letters, and perhaps even serve as the most powerful search engine the world has yet seen.

 

That is, if it can overcome its nagging problem of producing misinformation that sounds deeply persuasive. According to researchers interviewed by The New York Times, ChatGPT could become the Internet’s most virulent tool for spreading misinformation. Generative AI could soon hold outsized sway over the decisions we make and the predictions we act on, even if we do not fully understand how these algorithms work.

 

Beyond misinformation, generative AI is also susceptible to adversarial attacks in which malicious or offensive data is fed into the model. Such attacks can result in discrimination against vulnerable groups, especially when generative AI is used to guide decision-making.

 

And as people become more reliant – or even dependent – on generative AI, the risks associated with it will only increase. 

Ethics plus… 
 

This is why it’s critical for governments to move away from relying solely on ethical frameworks and the responsible compliance of big data holders towards stronger regulation, says Mark Findlay, professorial fellow at Singapore Management University’s Centre for AI and Data Governance. 

 

Current ethical frameworks around AI that governments have deployed emphasise norms such as transparency, privacy and accountability, and centre on self-regulation by companies.

 

However, Covid-19 ushered in an era of mass data sharing between the public and private sectors to combat the spread of the virus, which meant that “many of the personal data protections that you would expect as a result of ethical practices just disappeared,” says Findlay. Now, countries are moving towards “ethics plus”: ethical frameworks accompanied by stronger legal controls.

 

But as generative AI picks up steam, the clock is ticking. Consider ChatGPT, which became the fastest-growing consumer application in history, accumulating over 100 million users in just two months, a feat that took TikTok nine months to achieve.

 

“The excitement that’s been generated about these super chatbots means that there is going to be a situation where we’re shovelling data into the chatbot, thinking we’re getting benefit back… but we have no idea where the questions go and how ChatGPT is profiling you,” says Findlay. 

 

This may not seem significant now, but if such data gets sold to advertisers, people may soon be “pummelled” by a barrage of information from service providers, he explains. This is why stronger legal controls may be needed to ensure user data remains protected and that users understand how their data is being used. Tech companies may also need to adopt best-practice models governing how user data is handled in their operations.

…Regulation
 

The European Union (EU) is at the forefront of developing regulation around AI with its proposed Artificial Intelligence Act. The act classifies AI systems into a four-tiered risk framework based on their potential risk to fundamental rights such as privacy and safety, ranging from no risk to unacceptable risk. High-risk tools may face increased scrutiny from regulators and may need to undergo rigorous risk assessments.
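For illustration, here is a minimal sketch of how an organisation might triage its own AI systems against such a tiered scale. The tier labels, example use cases, and triage rules below are hypothetical and simplified; they are not the Act’s legal definitions.

```python
# Illustrative sketch only: hypothetical tiers and triage rules, not the EU AI Act's legal text.
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal or no risk"        # e.g. spam filters
    LIMITED = "limited risk"              # e.g. chatbots, with transparency duties
    HIGH = "high risk"                    # e.g. hiring or credit scoring, requiring assessments
    UNACCEPTABLE = "unacceptable risk"    # e.g. social scoring, prohibited outright


def triage(use_case: str) -> RiskTier:
    """Assign a hypothetical risk tier to an AI use case for internal inventory purposes."""
    if use_case in {"social scoring", "real-time biometric surveillance"}:
        return RiskTier.UNACCEPTABLE
    if use_case in {"hiring", "credit scoring", "medical triage"}:
        return RiskTier.HIGH
    if use_case in {"customer chatbot"}:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


print(triage("credit scoring"))  # RiskTier.HIGH
```

In practice, the obligations attached to each tier are set out in the Act’s legal text; a programmatic triage like this could only ever be a rough internal inventory aid.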

 

However, the rapid growth of generative AI platforms in recent months has stymied efforts by EU lawmakers to reach a consensus on the draft bill, Reuters reported this week. Even as they wish to introduce stronger legal controls, lawmakers are seeking to preserve a conducive space for innovation and further AI development.

 

Similarly, Asian societies will start questioning the value of relying solely on ethical frameworks to manage the data use of AI platforms, says Findlay. And because Asia lacks a leadership role in the development of software and tech, countries in the region may assert themselves more strongly on questions of ethical governance and develop their own frameworks for AI regulation, he says.

 

For example, countries might rely on local ethics to develop regulations that are context-specific and responsive to community concerns. Findlay suggests that Buddhist societies may choose to emphasise the role of compassion more than the role of rights, whereas countries like Malaysia and Singapore might emphasise community solidarity, or the kampong (village) spirit. 

 

At the same time, the proliferation of different standards could potentially hinder cross-border innovation. Without a shared set of global regulatory standards, companies will face increased costs in delivering AI products and services in different countries, as new iterations may have to be developed to meet differing regulatory requirements.

Singapore crowdsources AI benchmarks
 

Even though the question of ethical governance has become more pertinent and the case for global standards continues to mount, benchmarks for ethical principles such as fairness, explainability, and safety are often specific to individual use cases, says Wan Sie Lee, Director, Development of Data-Driven Tech at the Infocomm Media Development Authority of Singapore.

 

As AI technology evolves, a balance must be struck between government regulation and tech innovation. In view of this, the Singapore government has opted for the voluntary adoption of government guidance, Lee shares with GovInsider.

 

The country has rolled out AI Verify, an AI Governance Testing Framework and Toolkit, which aims to help AI system owners be more transparent about their AI through standardised tests, says Lee.

 

These tests help measure companies’ claims about their AI systems against internationally accepted principles. For example, the technical tests assess AI models against metrics such as explainability, fairness, and robustness.
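As a concrete illustration of what one such technical test might compute, here is a minimal sketch of a common fairness metric, demographic parity difference. It is a generic, assumed example with made-up data, and does not show AI Verify’s actual toolkit or APIs.

```python
# Illustrative sketch only: a generic fairness metric, not AI Verify's implementation.
import numpy as np


def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups (0 means parity)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)


# Hypothetical model outputs: 1 = approved, 0 = rejected, for two demographic groups.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(f"Demographic parity difference: {demographic_parity_difference(preds, groups):.2f}")
```

Metrics of this kind, alongside explainability and robustness checks, are the sort of standardised measurements that such test reports can draw on.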

 

The reports generated from these tests can help companies improve the performance of their systems and demonstrate to stakeholders that their platforms are aligned with ethical standards. 

 

Through the participation of tech players in AI Verify’s international pilot, the country aims to crowdsource industry best practices and build benchmarks. There are plans to share these benchmarks with international standards bodies as part of Singapore’s contribution to international standards development, Lee says.

 

Findlay shares that this approach is a good starting point, but it may still be too top-down. Though it helps companies evaluate their AI against ethical principles, it may not capture whether those principles translate to developers at the front end of the ecosystem. He also cautions that irresponsible companies may selectively evaluate certain good practices while omitting others.

Emerging digital self-determination movements

 

As the conversation on ethics and regulation evolves, it is critical that data subjects and data users negotiate respectful relationships to fill the gaps, says Findlay. Currently, there are many opportunities for companies to use and sell data without our knowledge, and communities need to develop pathways to better hold them accountable.

 

This may seem like a lofty goal, but it is important for data subjects to recognise that “it doesn’t matter whether we’re rich, poor, whether we come from Africa or from the richest country in the world, we generate the data. And if we stop generating the data… then companies would collapse,” says Findlay.

 

He points to the case of Facebook. When Apple changed its operating system to allow users to opt out of having their online behaviour tracked, analysts predicted that these changes would cost Meta US$10 billion.

 

He notes the rise of open finance within big financial institutions, which aims to put control of financial data back into the hands of customers. After all, the people best placed to help big banks check the quality of their data are the customers themselves.

 

Vulnerable data subjects globally have begun organising to better hold big data to account. Gig workers from India to Indonesia have developed workarounds to push back against unfair practices and opaque algorithms, reported Rest of World. In Amsterdam, workers have successfully sued Uber for lack of transparency.

 

Decision makers grounding AI regulation in local ethical principles may help ensure AI remains trustworthy and human-centric, even as global standards take shape. This will help ensure that AI and big data function as resources that serve the communities they are embedded in, rather than as an exclusionary force that primarily serves corporate players.