How can governments implement responsible AI?

By Google

Harnessing the power of AI while minimising the ethical risks that many fear.

“AI is humanity’s new frontier,” said Audrey Azoulay, Director-General of the United Nations Educational, Scientific and Cultural Organisation. “We stand at the dawn of a new era” where AI is being implemented in security, health, education and more, she added.

As the world looks toward this new frontier, how can governments ensure that AI is used responsibly? How can issues such as bias, accountability and practical implementation be addressed?

Google recognises that these issues are of critical importance and has taken steps to implement responsible AI. Its framework offers insights into the methods and values that governments can adopt for their own AI systems.

The AI constitution 


Google has created a set of principles under its Responsible AI programme. The principles were developed to ensure AI services would be “in the best interest of societies around the world”, said Tracy Frey, Director of Product Strategy & Operations, Cloud AI at Google.

One of these principles is to “avoid creating or reinforcing unfair bias”. AI facial recognition technology could lead to wrongful arrests if it is not trained to analyse faces of different races, the UN has noted.

AI systems must also be accountable: they should allow for feedback, provide explanations where possible, and remain under human direction and control.

AI should also be “socially beneficial” and have a positive impact on the world. In some cases, this might take the form of making technologies available on a non-commercial basis. This would bring the benefits of AI to as many people as possible.
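The article does not describe how Google engineers this accountability, but one common pattern that fits the principle is a human-in-the-loop threshold, where the system defers uncertain decisions to a person rather than acting automatically. The sketch below is an illustrative assumption, not Google’s implementation; the threshold value, the `Decision` type and the `request_human_review` stub are all hypothetical.

```python
# Hypothetical sketch of the accountability principle: low-confidence
# predictions are escalated to a human reviewer instead of being acted on
# automatically. The threshold and helper names are illustrative assumptions.
from dataclasses import dataclass

HUMAN_REVIEW_THRESHOLD = 0.90  # Assumed cut-off; a real system would tune this.


@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"


def request_human_review(label: str, confidence: float) -> str:
    """Stub for a review queue; a real system would route this to a person."""
    print(f"Escalating {label!r} (confidence {confidence:.2f}) for human review")
    return label  # The reviewer may confirm or override the model's label.


def decide(label: str, confidence: float) -> Decision:
    """Keep the system under human direction by deferring uncertain cases."""
    if confidence >= HUMAN_REVIEW_THRESHOLD:
        return Decision(label, confidence, decided_by="model")
    # Below the threshold, a person reviews and can override the decision,
    # providing the feedback channel the principle calls for.
    reviewed = request_human_review(label, confidence)
    return Decision(reviewed, confidence, decided_by="human")


# Example: a confident prediction passes through; an uncertain one is escalated.
print(decide("approve", 0.97))
print(decide("approve", 0.62))
```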

Putting it into practice


Creating principles is only half the task; they must be embedded in the systems themselves to produce more responsible AI. Frey shared that it is not possible to write a single ethics checklist of dos and don’ts covering all of Google’s AI programmes.

Instead, she recommends that organisations avoid a one-size-fits-all approach. Every algorithm deals with different data, technology and use cases, so the best practice is to consider ethics in every unique implementation of AI, she said.

Google also ensures that review committees for AI systems are diverse. These committees include both junior and senior employees, as well as technical and non-technical experts, so reviews of AI systems are informed by a wide range of perspectives.

Part of this staff mindset comes from the training that Google provides. While technology may seem like a ‘neutral’ field, the training aims to raise awareness among staff that ethics and technology are closely connected.

Transparent systems are another way to ensure that the AI principles are being followed. The tech giant released Explainable AI, a tool that lets users see the connection between a model’s input data and the conclusions it reaches.

This gives human users a clearer view of how the AI works. Tools such as Explainable AI allow data scientists to “build models with confidence, and provide human-understandable explanations”, wrote Stefan Hoejmose, Head of Data Journeys at Sky.
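Google’s Explainable AI product is accessed through Google Cloud, but the underlying idea, feature attribution, can be sketched with open-source tools. The example below is a minimal illustration and not Google’s implementation: it uses scikit-learn’s permutation importance to rank which input features most influenced a model’s predictions on a public dataset.

```python
# Illustrative only: not Google's Explainable AI API, but the same idea --
# feature attribution -- using scikit-learn's permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a simple classifier on a public dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Measure how much each feature contributes to held-out accuracy:
# shuffling an important feature should hurt performance noticeably.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the most influential features in human-readable terms.
ranked = sorted(
    zip(data.feature_names, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")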

Consultation with ethics groups


Google chose not to tackle responsible AI alone. It consulted ethics groups and human rights organisations to ensure that best practices are maintained. One example is Google’s Celebrity Recognition programme.

The AI service was created to help media outlets search through photos and videos and identify celebrities' faces. To ensure this system remained ethical, the non-profit organisation Business for Social Responsibility conducted an assessment of the system’s impact on human rights.

Business for Social Responsibility offers advice and expertise to help organisations ensure responsible behaviour and practices. It uses the UN’s Guiding Principles on Business and Human Rights, a framework for ethical business practice, as its basis.

Google then applied two guidelines to its Celebrity Recognition technology. First, it created a vetting process so that only appropriate media organisations could access the tool, preventing misuse by unknown users.

Second, ethics committees created a list of celebrities that the technology could recognise. Google ensures that users of the tool cannot alter this list, and every celebrity on it can choose to opt out.
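The article does not reveal how Google implements these safeguards internally, but together they amount to a simple gatekeeping layer: a vetting check on the calling organisation, plus a fixed, opt-out-aware allow-list. The sketch below is a hypothetical illustration; the organisation names, celebrity names and function are invented for the example.

```python
# Hypothetical sketch of the two safeguards described above: a vetting check
# on the calling organisation and a committee-maintained, opt-out-aware
# celebrity allow-list. Names and data are illustrative, not Google's.

VETTED_ORGANISATIONS = {"example-news-network", "example-broadcaster"}

# The allow-list is maintained by review committees; users cannot modify it.
CELEBRITY_ALLOW_LIST = {"Celebrity A", "Celebrity B", "Celebrity C"}
OPTED_OUT = {"Celebrity C"}  # Anyone on the list may choose to opt out.


def recognise_celebrities(organisation: str, detected_names: list[str]) -> list[str]:
    """Return only the recognitions that the safeguards permit."""
    # Safeguard 1: only vetted media organisations may use the tool at all.
    if organisation not in VETTED_ORGANISATIONS:
        raise PermissionError(f"{organisation!r} is not a vetted organisation")
    # Safeguard 2: report only names on the committee-approved list,
    # excluding anyone who has opted out.
    permitted = CELEBRITY_ALLOW_LIST - OPTED_OUT
    return [name for name in detected_names if name in permitted]


# Example: "Celebrity C" has opted out, so it is filtered from the results.
print(recognise_celebrities("example-news-network", ["Celebrity A", "Celebrity C"]))
# -> ['Celebrity A']
```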

While the potential of AI is clear, that potential cuts both ways, toward positive and negative outcomes alike. For an AI system to be beneficial, organisations must implement guidelines that guard against unjust outcomes and must not shy away from accountability.
