How governments can turn public policy into AI programmes

By Amit Roy Choudhury

The challenge is to shoehorn public policy into the precise objectives and outcomes that algorithms require, writes Amit Roy Choudhury.

With its ability to crunch humongous amounts of data, Artificial Intelligence (AI) is an important tool for governments in their quest to provide citizen-centric services to all sections of society.

This has become even more apparent over the past one-and-a-half years, with AI helping in the fight against the Covid-19 pandemic. Well-designed AI programmes can help governments ensure that services are delivered to the segments of the population that need them the most.

AI is also being used for a wide variety of purposes in the public sector: from regulating road traffic and providing emergency medical services, to law and order, border control, education, and service delivery. Government officials have enthusiastically taken to the use of AI, sometimes with spectacular results.

As the use of AI in the public sector increases, there is a tension between how AI works and how public policy is made that needs to be tackled.

What makes a good algorithm?


Public policy, especially in democratic societies, is formed based on compromises between various competing interests. This is good for democracy, but the exact outcome of a particular policy may not be very precisely defined since it is a compromise between various possible outcomes.

This can pose a problem when writing AI algorithms to implement the policy. AI, with its neural networks, is essentially a set of mathematical formulae. Imprecise outcomes and instructions can lead to bad algorithms.

Compounding matters is the fact that ethics has become an important part of the AI discussion. A growing fear that systemic biases could creep into AI algorithms, especially in cases where these AI networks work autonomously without human intervention, has resulted in the demand for explainability. There needs to be clear information on what the programmes are doing, why they are doing it and for whom they are doing it.

Singapore’s Model AI Governance Framework, for example, has two guiding principles which state: decisions made by AI should be “explainable, transparent and fair”; and AI systems should be human-centric (i.e. the design and deployment of AI should protect people’s interests including their safety and well-being).

There can be no compromise on the principle that AI algorithms must not have systemic bias. But this presents a problem: complex AI programmes work best as a “black box”. Explainability can be a tough ask for complex AI algorithms that are hard to understand, let alone “explain”, without high-level mathematics.

The challenge


So the challenge for public sector officials and the engineers who write the code is that public policy can be imprecise in both objectives and outcomes, yet it has to be shoehorned into AI programmes that require very precise and clearly defined objectives and outcomes.
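To make the mismatch concrete, consider a purely hypothetical benefits-targeting programme, sketched below in Python. A policy that says “support those who need it most” still leaves the engineer to pick hard numbers: how need is scored, how income is weighed against household size, and where the cut-off sits. None of the names or figures below come from any real scheme; every constant is an assumption standing in for a policy decision the legislation may never have spelt out.

# A minimal, hypothetical sketch of turning a policy goal into an algorithmic
# objective. Every constant below is an illustrative assumption, not a value
# taken from any real programme -- and that is precisely the point.

def eligibility_score(household):
    """'Support those who need it most' forced into a precise formula."""
    NEED_WEIGHT = 0.7        # how much income matters relative to family size
    SIZE_WEIGHT = 0.3        # -- the policy text never specified this split
    INCOME_CEILING = 30_000  # who counts as 'in need'? the code must decide

    income_need = max(0.0, 1 - household["income"] / INCOME_CEILING)
    size_need = min(1.0, household["members"] / 6)
    return NEED_WEIGHT * income_need + SIZE_WEIGHT * size_need

def select_recipients(households, budget_slots):
    """The 'outcome', made precise: rank by score and fund the top N."""
    ranked = sorted(households, key=eligibility_score, reverse=True)
    return ranked[:budget_slots]

households = [
    {"id": 1, "income": 12_000, "members": 5},
    {"id": 2, "income": 28_000, "members": 2},
    {"id": 3, "income": 9_000,  "members": 1},
]
print([h["id"] for h in select_recipients(households, budget_slots=2)])  # [1, 3]

The point of the sketch is not this particular formula but that someone has to choose one: every weight and threshold is, in effect, policy written by an engineer.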

This is not an insurmountable problem, but amid the brouhaha over how wonderful AI is, it is easy to overlook. Awareness of the problem helps government officials write policy in clear and concise terms and explain to the engineers what the desired outcomes are.

What could go wrong


There is also a need to acknowledge that even the best-intentioned AI programmes can go wrong. And this means, as in other spheres of activity, we need to learn from our mistakes.

A good example is what happened in the UK last year. With the country-wide school-leaving A-level exams cancelled in March 2020 due to the pandemic, the UK exams regulator, Ofqual, developed an algorithm to determine each student’s grade using existing information about teacher assessments, school performance, and other data. These grades were important because they would determine which university the students would go to.

Once the process was done, it was found that approximately 40 per cent of students had received grades lower than their teachers had predicted. This led to a nationwide outcry, largely because the methodology appeared to have exacerbated social inequality by generally favouring students at private schools at the expense of their state-sector peers.

The important lesson from this debacle was that there was no bad faith involved on the part of those who designed the algorithm; there was no deliberate intention of favouring elite private school students at the expense of their poorer peers studying in state schools. However, the law of unintended consequences caught up with them.

The key data used by the algorithm were the teachers’ assessments of each student’s likely grade and their ranking within each subject and class. To correct for possible bias or over-optimism in these assessments, the algorithm applied a deflator that anchored results to each school’s past performance, and this unintentionally downgraded many thousands of results, particularly in state schools in low-income areas, in comparison with elite private schools. While the general standard of students was probably higher in the private schools, the outliers in state schools, with the potential for high scores, were inadvertently discriminated against because their school’s history offered no precedent for top grades.
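A simplified sketch helps show the mechanism. The code below is not Ofqual’s actual model, which was considerably more complex; under that caveat, it only illustrates how mapping a teacher’s ranking of pupils onto a school’s historical grade distribution can cap a high-achieving outlier at a school with weaker past results.

# Illustrative simplification only -- not the real Ofqual model.
# Grades are assigned by mapping the teacher's ranking of pupils onto the
# distribution of grades the school achieved in previous years.

import numpy as np

def moderate_grades(teacher_ranking, historical_grades):
    """Map each pupil's rank onto the school's past grade distribution."""
    past = np.sort(np.array(historical_grades))[::-1]  # best to worst
    n = len(teacher_ranking)
    grades = {}
    for rank, pupil in enumerate(teacher_ranking):
        # Each pupil inherits the grade at the same percentile of the
        # school's historical results, regardless of individual ability.
        idx = int(rank / n * len(past))
        grades[pupil] = int(past[idx])
    return grades

# A state school whose past cohorts never scored above a 5 (on a 1-6 scale)
state_school_history = [5, 4, 4, 3, 3, 3, 2, 2]

# The teacher ranks an exceptional pupil first and predicts a 6, but the
# moderation cannot award a grade the school has never produced before.
print(moderate_grades(["outlier", "p2", "p3", "p4"], state_school_history))
# -> {'outlier': 5, 'p2': 4, 'p3': 3, 'p4': 2}: the outlier is capped at 5

Run on this toy data, the top-ranked pupil can never receive a grade the school has not produced in earlier years, which is essentially how capable students at historically lower-scoring state schools were pulled down.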

The whole episode ended with the UK authorities scrapping the algorithm’s results and falling back on the teachers’ assessments. This incident is an important example of what can go wrong when AI is used for quick results in the public sector.

In Europe and the US, AI programmes are being used by judicial systems to predict which prisoners are likely to commit crimes when they are released from jail. Again, unless there is a deliberate effort to prevent it, there is always the likelihood that prisoners from poor and disadvantaged sections of society will be penalised on the basis of statistical probabilities rather than their individual conduct.
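The mechanism is easy to illustrate. The toy example below is not any real recidivism tool; it simply assumes, for the sake of illustration, that the training data records re-arrests rather than actual re-offending, so the “prediction” ends up reflecting where policing is heaviest rather than any individual’s behaviour.

# A deliberately tiny illustration -- not any real risk-assessment system.
# If the historical data measures re-arrests, the model largely learns
# policing intensity and applies a group-level rate to every individual.

historical_rearrest_rate = {
    "heavily_policed_area": 0.42,  # more patrols -> more recorded re-arrests
    "lightly_policed_area": 0.18,
}

def risk_score(prisoner):
    # The "prediction" is just the group's historical base rate, applied to
    # everyone from that area regardless of their own conduct.
    return historical_rearrest_rate[prisoner["home_area"]]

print(risk_score({"name": "A", "home_area": "heavily_policed_area"}))  # 0.42
print(risk_score({"name": "B", "home_area": "lightly_policed_area"}))  # 0.18

Two individuals with identical personal records receive different scores purely because of where they live, which is the kind of systemic bias the governance principles discussed earlier are meant to guard against.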

A glimmer of hope


Despite these problems, AI has the potential to do immense good and is already being used to provide some game-changing services, such as credit for unbanked small businesses, medical services for marginalised communities, and direct benefit transfers to the people who need them the most. During the ongoing Covid-19 pandemic, marginalised sections of society in many countries have been lent a helping hand using AI.

Ultimately one needs to remember that AI can help in providing good governance but it can never be a substitute for good governance. The tool is a means to an end and it can never be the end in itself. AI must remain one among many arrows in the public sector quiver used to provide better citizen-centric services.

Amit Roy Choudhury, a media consultant and senior journalist, writes about technology for GovInsider.
