Navigating the perilous road of Generative AI

By Rachel Teng

At the recent Asia Tech Summit held in Singapore, AI experts and government leaders came together to discuss the future of AI governance approaches, amidst rising ethical dilemmas surrounding generative AI technology.

Keith Strier, Vice President of Worldwide AI Initiatives, Nvidia (right) and Ahmed Mazhari, President of Microsoft Asia (left), speaking at the ATxSummit held in Singapore on 6 and 7 June. Image: Asia Tech x Singapore 2023

In June 2023, a New York federal court faced an “unprecedented circumstance”: a lawyer had used ChatGPT to conduct legal research for court filings. The lawyer cited non-existent cases provided by the artificial intelligence chatbot, because he “did not understand that it was not a search engine, but a generative language-processing tool.” 

 

This incident raised several questions about the rapidly emerging generative AI universe. First, do end users truly know what generative AI is suitable and – more importantly – unsuitable for? Second, when generative AI hallucinates, who is liable? Finally, will AI governance and legal systems catch up in time to mitigate the ramifications of generative AI gone wrong? 

 

At the recent Asia Tech Summit hosted by Singapore’s key tech statutory board, the Infocomm Media Development Authority (IMDA), AI experts and government leaders from across the world came together to discuss the AI conundrum. 

 

Democratic, almost to a fault 

 

The top 500 supercomputers are concentrated in only 34 countries. Effectively, this means that 80 per cent of the world's countries do not have a computer powerful enough to train a large AI model, pointed out Keith Strier, Vice President of Worldwide AI Initiatives at US-based tech conglomerate Nvidia. 

 

“We often talk about the digital divide and less about the AI divide specifically. But the unique proposition that Gen AI poses is that in countries that don’t have the infrastructure – the majority of the world – it democratises access to AI through natural language processing,” said Strier. 

 

Anyone in the world with access to the internet and a phone, regardless of education or economic background, can harness the benefits of this supposedly equalising technology. “Nothing of this generality and power has revolutionised the world, perhaps since the discovery of electricity and fossil fuels,” said Blaise Aguera y Arcas, Vice President and Fellow at Google Research. 

 

But it is also this generality that makes regulations and safeguards difficult to put in place. First, countries may have different attitudes towards AI and the risks associated with it. Second, distinct groups of people may use generative AI in different ways.

 

To draw an analogy with land transport, bicycles, cars, and trucks each have distinct sets of rules governing their operation. But generative AI is like a one-size-fits-all vehicle, and the regulator would not know whether the user intends to use it as a bicycle, a car, or a truck, said Dr Ansgar Koene, Global AI Ethics and Regulatory Leader at EY. 

 

In turn, it is challenging to tell which set of risks a user might be exposed to, and which set of rules to impose to guarantee some level of safety. 

 

Who is liable? 

 

The question of liability arises when AI is used inappropriately, or produces incorrect outputs that lead to serious consequences. In the case of the lawyer who used ChatGPT, it was first and foremost the lawyer’s responsibility to fact-check his court filings. But Kay Firth-Butterfield, Executive Director of the Centre for Trustworthy Technology, a World Economic Forum centre, questioned whether other liabilities need to be in place when AI models produce answers that are entirely hallucinated. 

 

“If there isn't a liability attached to giving answers that are plainly wrong, what is the incentive to actually make those answers better?” said Firth-Butterfield. This becomes an even more dire concern when applied at scale to industries that may most benefit from AI, such as healthcare. 

 

“If an ER doctor who is pressed for time gets an incorrect diagnosis from a large language model, whose liability is it – the doctor who asked the question, or the people who trained the model that provided the incorrect information?” she asked. 

 

Doctors and lawyers aside, it is even more important to consider liability when non-experts are using generative AI, because the layperson may not know where or how to fact-check the information they obtain using AI. 

 

“It is so easy for people who don’t understand how AI works to be suckered into thinking it’s their friend, the new love of their lives, or to let it help them decide who to vote for during elections,” said Firth-Butterfield. Most recently, a young Belgian man took his own life after extended conversations with an AI chatbot that “encouraged” him to sacrifice himself to mitigate the climate crisis.

 

Machines should not pass themselves off as human beings, and must always identify themselves as machines. “There are some of us who have been trying to practice responsible AI since 2014, and this is one of the most obvious tenets of transparency,” said Firth-Butterfield. 

 

Agreeing, Koene added that the onus is also on developers to advertise their AI tools responsibly and accurately. 

 

“When we talk about AI, we are always reaching towards anthropomorphic language to describe what systems can do, which always leads to an impression of the systems being more human-like than they actually are,” he said. Simply put, the “I” or “me” that an AI chatbot might use is merely a marketing or user experience facade. 

 

When a generative AI model “hallucinates”, it is simply operating in the way it is supposed to operate, generating content that a human user would most likely accept as a good answer. “That is the operating parameter around which it has been trained – none of the training was about making sure the output is true,” said Koene. 

 

Who are we missing? 

 

Since AI technology begins with data and optimises its behaviour based on the data it is fed, one can never say that the technology is neutral or unbiased, according to Prof Yi Zeng, Professor and Director of the International Research Center for AI Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences. 

 

ChatGPT has an estimated 100 million users. Yet, 2.9 billion people on earth are still not connected to the internet. “This actually means that we are getting answers drawn from data created mainly from the Global North, and mainly from men,” pointed out Firth-Butterfield. 

 

These biases have already led to grave repercussions. In March 2022, the Dutch tax authority was revealed to have ruined the lives of thousands of women and children after using a self-learning algorithm to profile those who might have committed childcare benefits fraud. 

 

The authorities penalised families over mere suspicion of fraud based on the algorithm’s outputs, pushing many low-income and ethnic minority families into poverty, divorce, and depression. 

 

While the issue of data inclusion remains unresolved, the matter of data exclusion faces a similar predicament. “What happens if people don’t want their data to go into these large language models, like we’re hearing from indigenous communities? What happens if their voices aren’t included?” asked Firth-Butterfield. 

 

At the end of the day, public trust is a long-game strategy, said Ben Brooks, Head of Public Policy at Stability AI. “If people don’t trust that their data is going to be protected, we’re never going to get to a point where people are comfortable using AI in industries like healthcare or law,” he said. 

 

Nevertheless, speakers at the Asia Tech Summit found it encouraging that regulatory discussions are already taking place at this early stage. “I think we’re positively benefitting from the fact that this technology is only about six months old, and we’re already accepting regulations as the way forward,” said Ahmed Mazhari, President of Microsoft Asia.

