Tackling legal risks in generative AI
By James Yau
Paul McClelland, Head of Legal & Faculty, IPOS International, Singapore, outlined the various existing legal frameworks which could apply to generative AI, including copyright, personal data, product liability, anti-discrimination, and the law of contract.

The promise of Gen AI is immense, but so too are its legal challenges. Image: GovInsider
At the heart of artificial intelligence (AI) lies a voracious appetite for data.
Speaking at GovInsider’s Festival of Innovation event, Paul McClelland, Head of Legal & Faculty for the Intellectual Property Office of Singapore (IPOS) International, pointed out that “AI training requires data, a lot of data.”
“But where does this data come from? Often, it's scraped from the internet, raising thorny questions about terms of service violations and potential copyright infringements,” he added.
As a subsidiary of IPOS, a statutory board under Singapore's Ministry of Law, IPOS International is tasked with building Singapore’s future growth as a global hub for intellectual property creation, commercialisation and management.
As generative AI (Gen AI) development and deployment proliferate, both developers and users need to be more mindful of the legal risks.
Copyright conundrums
McClelland highlighted the case of Jason Michael Allen, an American digital artist who created an AI-generated image using Midjourney and attempted to copyright it, but was ultimately rejected by the US Copyright Office.
The digital picture named Théâtre D’opéra Spatial won the blue ribbon in the Colorado State Fair’s annual art competition category for emerging digital artists in 2022.
The rationale given by the office was that there was no human creativity involved in creating this picture.
“It was purely computational. And under the US copyright system, that is not deserving of copyright protection,” McClelland explained, noting the implication that AI-generated works currently cannot be protected against unlicensed duplication.
Such works include software, pictures, text and videos.
He illustrated this with a striking example, showcasing an image of Van Gogh's Starry Night generated with DALL-E, a text-to-image AI system.
While the image was not an exact replica, McClelland said that the system’s replication of the painting’s expressive choices still classified it as a copy.
Perils of personal data
Personal data protection laws raised further questions about data access and correction rights in AI systems, McClelland highlighted.
Under Singapore's PDPA (Personal Data Protection Act), a data subject is entitled to access the information an organisation holds about them and to request that it be corrected if it is wrong, he said.
However, given the high cost of training large language models (LLMs), McClelland noted that such data is very unlikely to be removed anytime soon.
He quoted Google's former Chief Decision Scientist Cassie Kozyrkov on this: "Trying to remove training data once it's been baked into the large language model is like trying to unbake a cake. You basically just have to start over."
This “unbaking” problem presents significant challenges for compliance with data protection regulations, and McClelland urged public sector officials to be vigilant and compliant when training AI models.
Liability in the age of AI
As AI systems become more autonomous, questions of liability become increasingly complex.
From self-driving cars to AI-powered medical diagnoses, who bears responsibility when things go wrong?
McClelland suggested that the concept of "negligence liability" may apply, where someone whose actions reasonably foreseeably cause harm to another person has a general duty to avoid that harm.
As AI systems are increasingly utilised, it is important for public officials to consider what harm these systems may cause if the training data is incorrect, and what steps to take to mitigate those harms. Otherwise, they may face potential liability in negligence.
This also extends to breach of contract, where an AI tool does not live up to the terms stated in a procurement contract, McClelland highlighted.
By understanding and addressing these risks head-on, public sector officials can harness the power of AI while safeguarding the rights and interests of individuals and society as a whole.
You can watch McClelland’s FOI presentation recording on-demand here.