Singapore's lessons on AI governance
By Amit Roy Choudhury
With increasing concerns about the ethics behind AI programs, governments need to step in and craft the rules.
The Three Laws of Robotics, devised by science fiction writer Isaac Asimov, are set in a fictional universe where robots have far surpassed humans in intelligence and ability. The human creators of the first intelligent robots crafted the laws to ensure that the machines remained benign helpers, friends and protectors of humans. The three laws were hardwired into the algorithms that gave the robots consciousness, and ensured that, under no circumstances, would the machines harm humans or allow humans to come to harm in their presence.
We are now at a point in history where fact and the fertile imagination of a great author are converging. AI is making machines hyper-intelligent, and a host of eminent people, from Elon Musk to the late Stephen Hawking, have already expressed fears about the future “dangers” posed to the human race by AI.
Are these fears justified and, if so, what steps need to be taken to ensure AI does not become like Skynet from the Terminator movie franchise, bent on destroying the human race? To answer this question, it makes sense to take a step back and look at the issue in totality.
Revolutionary change
What exactly is AI and why is it different from other technological advances that we have witnessed over the past two decades?
To understand, we need only look around us. A combination of technologies is changing the way we live, work and play in an unprecedented manner. Huge amounts of data are being created thanks to the digitalisation of every part of human activity. This data is being analysed, often autonomously, by intelligent software programs that frequently use what are called neural networks – a process that in some ways mimics the way the human brain works. As a result, computer programs can interpret events in real time and provide inputs that can be used as actionable intelligence. These programs are referred to as AI.
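To make the “neural network” idea concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: real systems learn millions of weights from data, whereas this toy network uses random ones.

```python
# A minimal sketch of the idea behind a neural network: layers of weighted
# sums passed through non-linear "activation" functions, loosely mimicking
# neurons. The weights here are random, purely for illustration.
import numpy as np

def sigmoid(x):
    """Squash any number into the range (0, 1), a bit like a neuron firing."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(seed=0)

# A toy two-layer network: 3 input signals -> 4 hidden "neurons" -> 1 score.
w_hidden = rng.normal(size=(3, 4))
w_output = rng.normal(size=(4, 1))

def forward(inputs):
    """One forward pass: each layer weighs its inputs and fires."""
    hidden = sigmoid(inputs @ w_hidden)
    return sigmoid(hidden @ w_output)

# Three example "events", each described by three numeric features.
events = np.array([[0.2, 0.9, 0.1],
                   [0.8, 0.1, 0.5],
                   [0.4, 0.4, 0.7]])
print(forward(events))  # one score per event, e.g. a fraud-risk estimate
```

The essential point is that the output comes from layered arithmetic tuned on data, not from explicit rules a human wrote down.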
Much of the world’s smart city projects and Singapore’s own Smart Nation ambitions are based on this ability to analyse gigantic amounts of data in the blink of an eye and then draw useful information from the data. This is often done without a human in the loop, with machines talking to other machines and arriving at decisions.
The product suggestions that you get when browsing websites such as Amazon or Google are based on AI algorithms that try to anticipate what you are looking for. More often than not, they get it right. No human agent is involved in this process; given the millions of simultaneous log-ins that these mega sites receive, it would be physically impossible without automated, intelligent programs. Many of the voice response functions at call centres are actually AI programs mimicking a human voice. Some tests have shown that certain AI programs can craft very credible fake news from just a few bits of information.
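For illustration, here is a toy “people who bought this also bought” recommender in Python. The purchase matrix and product names are invented; production systems at this scale are far more sophisticated, but the underlying intuition is similar.

```python
# A minimal sketch of collaborative filtering, assuming a toy purchase
# matrix (rows: users, columns: products; 1 = bought, 0 = not bought).
import numpy as np

purchases = np.array([
    [1, 1, 0, 0],  # user A: camera, tripod
    [1, 1, 1, 0],  # user B: camera, tripod, memory card
    [0, 0, 1, 1],  # user C: memory card, headphones
])
products = ["camera", "tripod", "memory card", "headphones"]

def recommend(user_row):
    """Suggest what similar users bought that this user does not yet own."""
    similarity = purchases @ user_row      # shared purchases with each user
    scores = similarity @ purchases        # weight products by similarity
    scores[user_row == 1] = 0              # never re-suggest owned items
    return products[int(np.argmax(scores))]

print(recommend(purchases[0]))  # user A is nudged towards "memory card"
```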
AI is seeping into every aspect of our lives. For example, AI Singapore recently announced an S$35 million grant to use AI solutions to arrest the progression and complications of diabetes, high cholesterol and high blood pressure in Singapore. Just think about that for a moment: a software algorithm will help you keep your diabetes or hypertension under control.
Turing test
To be fair, current AI programs are no match for Asimov’s intelligent robots. Today’s AI programs are still a long way from passing the Turing test. Devised by Alan Turing, the test asks whether a computer in one room can, by responding to random questions, convince a person in another room – one who knows nothing about the computer – that it is a person and not a program. The idea behind the test is that a computer program (an AI) able to pass itself off as human in this way would have attained consciousness.
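The setup Turing described can be sketched in a few lines of Python. Both respondents below are canned stand-ins, purely for illustration; in the real test one would be a live human typing replies, and a contending AI would have to generate convincing answers on the fly.

```python
# A minimal sketch of Turing's "imitation game": the judge sees only
# labelled text and must guess which hidden respondent is the machine.
import random

def person(question):
    # In the real test, a human would type a reply here.
    return "Honestly, I barely slept, so forgive the short answer."

def program(question):
    # A contending AI would generate a convincing, context-aware reply.
    return "That is an interesting question. Could you rephrase it?"

def imitation_game(question):
    """Hide who is behind each label, then show the judge the transcript."""
    respondents = [person, program]
    random.shuffle(respondents)
    for label, respondent in zip("AB", respondents):
        print(f"{label}: {respondent(question)}")
    # The program "passes" if judges cannot reliably pick it out.

imitation_game("What did you dream about last night?")
```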
However, the important point here is that AI programs do not need to become self-aware to pose a potential threat. That is because, like any software program, an AI is only as good as the algorithms used to write it. These programs are written by humans, and this is where conscious or unconscious bias can set in.
As AI gets used in real-life situations, in areas like criminal justice, law-making, education, the delivery of government services and security, any inherent bias built into the AI program can have major consequences.
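A toy example shows how this happens. In the hypothetical Python sketch below, all loan records are invented; a “model” that simply learns the majority of past human decisions will faithfully reproduce whatever skew those decisions contain.

```python
# A minimal sketch of bias entering through training data: the model
# learns to repeat past human decisions, fair or not. All data invented.
from collections import defaultdict

# (neighbourhood, past human decision) - the historical record is skewed.
history = [
    ("north", "approve"), ("north", "approve"), ("north", "approve"),
    ("north", "deny"),
    ("south", "deny"), ("south", "deny"), ("south", "deny"),
    ("south", "approve"),
]

# "Training": count past outcomes for each neighbourhood.
counts = defaultdict(lambda: defaultdict(int))
for neighbourhood, decision in history:
    counts[neighbourhood][decision] += 1

def model(neighbourhood):
    """Predict by majority vote of past decisions - bias included."""
    outcomes = counts[neighbourhood]
    return max(outcomes, key=outcomes.get)

# Two identical applicants who differ only in neighbourhood:
print(model("north"))  # approve
print(model("south"))  # deny: the skew in the data has become the rule
```

No line of this code is malicious; the discrimination arrives entirely through the historical data it was trained on.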
There is increasing evidence that AI-based facial recognition programs and other automated systems can amplify biases and discriminatory practices. In particular, fears have been expressed that the ability to identify people through facial recognition carries the potential for racial bias.
In a democratic society like Singapore, an AI-based facial recognition program can be used to nab a potential terrorist before any harm is caused. In a police state, the same technology could be used to keep dissidents in check. AI itself does not discriminate; it is the humans behind the program who are the potential problem.
Privacy concerns
There are also concerns about privacy. With so much digital data available, and with AI programs capable of sniffing out relevant data from disparate sources, unrestricted access to that data without the necessary permissions raises major privacy issues. In short, an unscrupulous agency could build an unauthorised but very accurate profile of an individual from the data available on the internet.
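The risk is easy to demonstrate. In the hypothetical Python sketch below, three unrelated and individually innocuous data sources, joined on one shared identifier, combine into a detailed personal profile; every record is invented.

```python
# A minimal sketch of profile-building: merge what scattered sources
# know about one person via a shared key. All records are invented.

shopping = {"jane@example.com": {"recent_purchase": "pregnancy vitamins"}}
fitness  = {"jane@example.com": {"daily_steps": 2100, "sleep_hours": 5.0}}
location = {"jane@example.com": {"frequent_place": "a Clementi clinic"}}

def build_profile(email):
    """Stitch together whatever each source holds on one identifier."""
    profile = {"email": email}
    for source in (shopping, fitness, location):
        profile.update(source.get(email, {}))
    return profile

# No single source reveals much; the combination tells a detailed story.
print(build_profile("jane@example.com"))
```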
While various technology companies have set up ethics committees to oversee their AI research and products, there is a feeling that more needs to be done. The recent fracas at Google has put a spotlight on AI and ethics. Google’s Advanced Technology External Advisory Council (ATEAC) was set up to examine the ethics of the company’s work on AI, machine learning and facial recognition. However, there were protests over the composition of the committee; one member resigned, there were calls for another to be removed, and Google dissolved the committee.
This raises two important issues. First, there is an urgent need to oversee the ethical aspects of developing super-intelligent, self-learning computer programs. Second, and relatedly, the companies developing these programs are not best equipped to adjudicate the ethical issues involved. This is one area where government oversight is required.
Singapore has been a pioneer in this regard. In January this year, the Republic released a framework on how AI can be used ethically and responsibly by both the government and businesses as they grapple with the issues raised by the new technology.
According to S. Iswaran, Singapore’s Minister for Communications and Information, the model framework for AI governance is a “living document” intended to evolve along with the fast-paced changes of a digital economy. It will incorporate feedback from industry and be refined as more views come in.
Earlier this month, Singapore won a top award at the prestigious World Summit on the Information Society (WSIS) Prizes for its work on AI governance and ethics, beating four other finalists. Winners in a total of 18 categories of the WSIS Prizes were announced during an award ceremony at the annual WSIS Forum held in Geneva, Switzerland.
AI is both an opportunity and a potential threat. As a result, the problems arising from the unethical use of AI need to be tackled immediately. Analysts reckon that thousands of fully mature AI-based services will become available by 2022-23, so privacy and ethical concerns must be addressed now, before the genie gets out of the bottle.
While the world may not need Asimov’s three laws, at least not yet, it certainly needs multilateral government initiatives, like the one taken by Singapore, to manage the broader issues behind the use of a revolutionary and powerful technology such as AI.
Amit Roy Choudhury, a media consultant and journalist, writes about technology for GovInsider.