Can AI ethics stand the test of a conflict?
By Amit Roy Choudhury
Ethical AI guidelines are a noble endeavour, but will such guidelines survive a cyber war? GovInsider’s tech columnist discusses.
Image: Mike Mackenzie - CC BY 2.0
What makes all this possible is Artificial Intelligence (AI). Coupled with humongous amounts of data, these technologies, which are essentially mathematical algorithms, can glean a startling amount of insight in real time. Fed with sufficient data, AI algorithms can churn out actionable intelligence in virtually any field.
AI is being used for weather prediction, traffic management, banking, drug discovery, online computer games and, increasingly, by the armed forces. One of the reasons, as we shall see later, for the sudden rise in the range and sophistication of cyberattacks, such as advanced persistent threats (APTs) and well-organised fake news campaigns, is the use of AI tools by nation-state-backed hackers.
The versatility of AI has made it vital for economic growth in an increasingly digitalised global economy, and all countries are in a race to develop viable AI capabilities. Singapore, one of the Asian leaders in AI adoption, is a good example.
AI Singapore (AI SG) is a national programme to boost Singapore’s AI capabilities. It is driven through a partnership between the National Research Foundation (NRF), the Smart Nation and Digital Government Office (SNDGO), the Economic Development Board (EDB), the Infocomm Media Development Authority (IMDA), SGInnovate and Integrated Health Information Systems (IHiS).
Ethics and AI
While AI has changed our lives for the better, it has also ignited a debate on its ethical use, with a wide range of opinions. Some argue that AI is inherently evil and that its use and development should be banned. That, however, would be a case of throwing out the baby with the bathwater.
AI programmes can be agents for great good, but it is undeniable that they can also cause great harm. The programmes by themselves are neither good nor bad; they are just algorithms. What matters is how they are built and used. Facial recognition is a classic example. Governments can use AI-driven facial recognition software to identify criminals at a crime scene. The same software, with a few tweaks, can also be used by totalitarian governments against sections of society or dissidents.
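To make the dual-use point concrete, here is a minimal sketch of face matching using the open-source face_recognition Python library. The filenames and tolerance value are illustrative assumptions, not a description of any real deployment.

```python
# A minimal face-matching sketch using the open-source face_recognition
# library. Filenames are hypothetical placeholders for illustration only.
import face_recognition

# Encode the face of a known person, e.g. a suspect already on file.
known_image = face_recognition.load_image_file("suspect.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode every face detected in a frame captured at the scene.
scene_image = face_recognition.load_image_file("scene_frame.jpg")
scene_encodings = face_recognition.face_encodings(scene_image)

# Compare each detected face against the known face.
matches = face_recognition.compare_faces(scene_encodings, known_encoding, tolerance=0.6)
print("Match found:", any(matches))
```

The point is how little separates legitimate and illegitimate use: swap the watchlist that the known encodings come from, and the same handful of calls serves a very different purpose.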
In the West, particularly in Europe and the US, there has been much debate about the ethical use of AI. Companies like Google, a leader in the development of AI technology, have struggled with controversies over the tools they build using AI, both internally with their employees and with external stakeholders.
Last year, for example, the company faced a massive pushback from its employees over its bid for a US Defence Department AI contract.
In response to criticism, much of it well-meaning, US companies that lead the field, including Google, Amazon and Microsoft, have set up committees to frame guidelines for the ethical use and development of AI. There is also a great deal of ongoing academic discussion on AI and ethics.
According to an Ernst & Young study, the controversy over AI centres on four key areas: fairness and avoiding bias; innovation; data access; and privacy and data rights.
It is within this context that the US Department of Defence, after several months of deliberations and consultations, announced the adoption of a series of ethical principles for the use of AI by the country’s armed forces.
The new principles will form the basis for the design, development, deployment and use of AI by the US armed forces. They are the result of 15 months of extensive consultation among leading AI experts, private industry and other stakeholders.
Five principles
The US armed forces’ AI ethics comprise five principles:
Responsible: The US armed forces will exercise “appropriate levels of judgment and care” in their deployment and use of AI.
Equitable: The department will take deliberate steps to minimise unintended introduction of bias in AI capabilities.
Traceable: US armed forces users of AI should be able to trace back through any output or AI process, with “auditable methodologies, data sources, and design procedure and documentation.”
Reliable: AI capabilities will have an “explicit, well-defined domain of use” and will be rigorously tested against that use.
Governable: The software will be built in such a way that it can stop itself if it detects that it might be causing problems (a minimal sketch of this idea follows below).
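The principles do not prescribe implementations, but the “governable” idea can be illustrated with a toy sketch: a wrapper that defers to a human when a model is not confident enough and shuts itself down after repeated anomalies. Every name, threshold and rule below is a hypothetical assumption, not a description of how any military system actually works.

```python
# Hypothetical illustration of the "governable" principle: a wrapper that
# defers to a human operator on low-confidence outputs and disengages itself
# after repeated anomalies. All thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GovernedModel:
    min_confidence: float = 0.9   # below this, defer to a human operator
    max_anomalies: int = 3        # after this many deferrals, stop entirely
    anomalies: int = 0
    engaged: bool = True

    def decide(self, label: str, confidence: float) -> str:
        if not self.engaged:
            return "DISENGAGED: system halted, awaiting human review"
        if confidence < self.min_confidence:
            self.anomalies += 1
            if self.anomalies >= self.max_anomalies:
                self.engaged = False  # the system stops itself
            return f"DEFER: low confidence on {label}, escalate to human"
        return f"ACT: {label}"

model = GovernedModel()
for label, conf in [("object A", 0.95), ("object B", 0.55),
                    ("object C", 0.60), ("object D", 0.40),
                    ("object E", 0.99)]:
    print(model.decide(label, conf))
```

The interesting design question, which the sketch glosses over, is who sets the thresholds and who is allowed to re-engage the system once it has stopped itself.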
The US Department of Defence guidelines have been well received and are a rare case of a government organisation leading by example and providing a template for the private sector.
The militaries of other nations, such as France, are also working on guidelines for the ethical use of AI. It is not hard to guess why militaries are interested in AI and ethics: the next major conflict will see extensive use of AI-driven military technologies. As a US military spokesman said recently, the nation that successfully implements AI principles will lead in AI for many years.
With the extensive digitalisation of infrastructure, cyberattacks designed to cripple these systems during a conflict are a real and persistent threat. Cyberattacks, especially those that fall just short of provoking a kinetic response, have become fair game thanks to the extensive use of AI.
Social unrest stoked by the skilful use of fake news is also a grim possibility, especially during a conflict. For example, it is now possible to create deepfakes: synthetic media in which a person in an existing image or video is replaced with someone else’s likeness.
Deepfakes leverage AI and have added a scary dimension to the pervasive problem of fake news. The technique was first noticed in 2017, and it is only a matter of time before it is weaponised. Incendiary videos, released during a time of crisis, can wreak havoc.
In an interesting example of the arms race that has started around AI, the best way to spot deepfakes is to use other AI programmes. In this case we essentially have two sets of competing AI programmes battling it out.
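The detector side of that contest is, at heart, just another classifier. The sketch below is a hypothetical PyTorch example, not any production detector: an untrained network that takes a single video frame and emits a probability that the frame is synthetic.

```python
# Hypothetical sketch of a deepfake detector: a small convolutional network
# that scores a single video frame as real or synthetic. Real detectors are
# far larger and also exploit temporal and frequency-domain artefacts.
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # a single logit: real vs synthetic

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

detector = FrameDetector()                      # untrained; weights are random
frame = torch.rand(1, 3, 224, 224)              # one RGB frame, values in [0, 1]
p_fake = torch.sigmoid(detector(frame)).item()  # probability the frame is synthetic
print(f"P(synthetic) = {p_fake:.2f}")
```

A forger can, in turn, train a generator against exactly this kind of detector, which is why the contest has the character of an arms race rather than a one-off fix.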
Cyberwarfare & AI
In conclusion, there are two major points to consider. The first is that during the next major conflict, cyberwarfare will be an important element of both the offensive and defensive capabilities of a nation. The second is a question: while ethical guidelines for the use of AI by armed forces are a noble endeavour, will they survive the first day of a war, particularly if the other side does not place such constraints on its own actions?
The first AI-powered conflict between nations may not involve giant robots fighting each other with laser guns. Rather, the fight will happen silently in the cyber sphere between AI programmes, one trying to cripple critical infrastructure and the other defending it. Neither side will want to fight with one arm tied behind its back.
In an interview with Siliconrepublic.com, AI expert David Gunning noted that while the US may not want a military arms race in creating the “most vicious AI systems around… I’m not sure how you can avoid it”. As Mr Gunning says, in any technology arms race no country wants to fall behind and then be surprised.
The bottom line is that neither countries nor individuals can ignore or discard AI. The only hope rests in understanding its abilities as well as its limits. Instead of controlling AI, what needs to be controlled is the person using it.
Amit Roy Choudhury, a media consultant and senior journalist, writes about technology for GovInsider.