Is AI smart enough to see past its own hype?

By Andrew Greenway

Guest column by our new Associate Editor for Europe.

It is not a bold prediction to say that you will hear a lot more about AI and machine learning this year. The Financial Times reported on an AI program at London’s Imperial War Museum cracking the famously tricky Enigma code used by Germany in the Second World War.

In the 1940s, solving Enigma took hundreds of person-years and great expense. In 2017, it took the AI less than 13 minutes and cost $17. Thanks to a combination of real progress and noisy PR, these technologies are at the top of every government’s innovation agenda.

For most people working in government, the only thing bigger than the excitement at what these science-fiction tools can do is the confusion about how they do it. Policy makers are rightly wary of things they don’t understand. They should be equally wary of companies that claim to know better; very few of them do.

While it’s important for governments not to get carried away, that doesn’t mean they should forget about AI until the world of business decides it has ironed out the risks. Artificial intelligence and machine learning promise a radical step-change in how countries are run and managed. Responsible governments can’t afford to be spectators to a change that might prove as big as the Internet.

Used to their full potential, AI and machine learning could completely change the shape of public services, transforming citizen outcomes for everyone. To reach that prize, governments will almost certainly need to change the whole shape of public institutions too.
 
This is because adding some shiny AI onto an existing public service or policy really won’t help much. At best, the service will be a little slicker and cheaper than before. At worst, AI introduces a further layer of bureaucracy, complexity and confusion: the last thing most governments need.

The real benefit only arrives when officials start designing brand new services that consider what AI can do from the very beginning. Today, not many policy makers know where to start. The most successful public servants and politicians of the future will be those who have the technological and data literacy needed to turn emergent innovation into better outcomes for their citizens.

Of course, that is easier said than done.

AI in policy making


Imagine a growing public outcry about elderly drivers being involved in fatal road accidents. Something must be done.

In the present day, the responsible minister would pledge to reduce the number of people killed by elderly drivers and pluck a sensible-sounding target from thin air. A policy team would then be sent away to explore options. Maximum driving ages. Annual driving tests. Mandatory vehicle safety features.

Against the clock of political pressure, the policy team must then find evidence for the option best able to meet the minister’s goal. The minister must quickly decide whether that option is politically acceptable.

Many years later, maybe by 2023, the government will actually see some evidence about whether its policy change has had the intended effect. This being government, the evidence will support a variety of viewpoints: anything from the intervention being wildly successful to completely useless. Which story is accepted ultimately comes down to the political context.

AI, however, could change this. Let’s imagine that ministers now have the ability to incorporate real-time data from every driver in the country into a digital driving licence. If elderly drivers did appear to be causing more accidents, the minister could argue for introducing licences that grant conditional permission to drive.

Those conditions might be based on an individual’s health data, such as alcohol levels or blood pressure, accessed via the driver’s phone or smartwatch. The AI would then decide whether that individual was safe to drive that day. The answer could be no in the morning, and yes in the evening.
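
To make the idea concrete, here is a minimal sketch of what one of those conditional checks might look like. It is purely illustrative: the HealthReading fields, the thresholds and the may_drive_today function are all hypothetical stand-ins for what would, in practice, be a trained model drawing on far richer data.

```python
from dataclasses import dataclass

# Hypothetical snapshot of the readings a driver's phone or smartwatch
# might report on a given day. Field names and units are illustrative.
@dataclass
class HealthReading:
    blood_alcohol_pct: float   # blood alcohol concentration, e.g. 0.02
    systolic_bp: int           # systolic blood pressure in mmHg
    resting_heart_rate: int    # beats per minute

# Illustrative thresholds; a real system would learn these from data
# and weigh many more signals than three.
MAX_BLOOD_ALCOHOL = 0.05
MAX_SYSTOLIC_BP = 180
MAX_RESTING_HEART_RATE = 120

def may_drive_today(reading: HealthReading) -> bool:
    """Return True if today's readings fall inside the permitted ranges."""
    return (
        reading.blood_alcohol_pct <= MAX_BLOOD_ALCOHOL
        and reading.systolic_bp <= MAX_SYSTOLIC_BP
        and reading.resting_heart_rate <= MAX_RESTING_HEART_RATE
    )

# The same driver can be refused in the morning and permitted in the evening.
morning = HealthReading(blood_alcohol_pct=0.07, systolic_bp=135, resting_heart_rate=72)
evening = HealthReading(blood_alcohol_pct=0.01, systolic_bp=135, resting_heart_rate=72)
print(may_drive_today(morning))  # False
print(may_drive_today(evening))  # True
```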

In this scenario, an AI-conditional driving licence would result in fewer accidents without preventing capable people from driving. Card-based licences would disappear, insurance premiums would fall and the road death toll would drop.

Trust and accuracy


Sounds great, right? Not necessarily for those who value their privacy. Would drivers be happy with the government knowing where and how fast they are driving at all times? Is state oversight of an individual’s biometric health data really a good idea?

Then there’s the data itself. Is the quality of government data good enough to base intrusive interventions upon? There is no government (yet) that can honestly say it has the reliability needed. The output of an AI is only as good as its training data: whatever a program initially uses to develop its algorithms, patterns and links. If the training data is flawed, harbouring errors or biases, these become baked into the AI.
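
As a toy illustration of how that happens, the sketch below uses invented numbers, not real accident data. If accidents involving one group of drivers are systematically over-recorded in the training set, anything learned from that set inherits the distortion.

```python
from collections import Counter

# Hypothetical training records of (age_band, had_accident). Suppose
# incidents involving drivers over 70 are more likely to be reported,
# so they are over-represented even if true accident rates are equal.
training_records = (
    [("under_70", False)] * 95
    + [("under_70", True)] * 5
    + [("over_70", False)] * 85
    + [("over_70", True)] * 15
)

def learn_accident_rates(records):
    """A trivial 'model': the accident rate observed for each age band."""
    totals = Counter(band for band, _ in records)
    accidents = Counter(band for band, had_accident in records if had_accident)
    return {band: accidents[band] / totals[band] for band in totals}

rates = learn_accident_rates(training_records)
print(rates)  # {'under_70': 0.05, 'over_70': 0.15}
# The reporting bias alone makes older drivers look three times as risky,
# and every decision built on top of this output inherits that error.
```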

For AI to work, governments will need to create a whole new level of data quality and place deep data expertise at the heart of public service. There is some hard, unglamorous work ahead readying the underlying processes, people and institutions of government for the age we’re heading towards.

The prize of AI is huge, and whoever is brave enough to begin that hard work has the chance to transform public services for the better.

AI and machine learning are coming, whatever our governments decide to do.