How can governments build trust in AI?

By SAS

Data governance, stronger privacy and a culture of analytics are three key steps, according to a new study by MIT.

A new study by MIT has found that leaders need to focus on three key areas to build trust in artificial intelligence: data governance, strong privacy and a culture of analytics.

First, the report identified a need for greater data governance. Good governance ensures that the right data is used and that results are trustworthy, which is pivotal to public confidence in AI-driven platforms. High-quality data also gives analysts a strong foundation for their work.

Second, the report noted opportunities for stronger data privacy. Regulators across Asia are beginning to introduce and strengthen privacy laws, and businesses and governments should treat these as a chance to fundamentally improve the way they use data and to build their customers’ trust.

The third key area is creating a culture of analytics within organisations. San Francisco, for instance, is training civil servants across the government to ask questions of their data. Embedding this kind of analytical thinking in its culture has allowed the city government to identify uses for AI it was not aware of before.

The report shares more practical tips and case studies on how governments and businesses have successfully tackled AI challenges. Fill in the form below to download the complete findings.

