Can Singapore build secure and fair AI?

By Huawei

As the country takes the lead in AI governance, companies like Huawei are in a unique position to build trusted tech.

One of Singapore’s greatest strengths is that despite its size, it has an outsized effect on any business that operates in Asia.

“This is again one of Singapore’s key value propositions – we are a small, open economy. We are pro-business”, Communications and Information Minister S Iswaran said at the World Economic Forum this year. The government has had to constantly stay a step ahead to ensure it offers businesses the best possible environment.

The Minister was speaking at the launch of the country’s vision for how organisations should build and use AI, the first of its kind in Asia. Asked why other, possibly larger, countries would want to learn from Singapore, he said: “We are also keen to engender a rules-based, norms-based trading and economic environment globally. Therefore, when we propose some of these ideas, they tend to be seen in that context.”
 

Singapore’s AI approach


Singapore has won a prestigious award for its work on AI governance. Its fundamental principles are that the use of AI should be explainable, transparent and fair, and that human interest should be the primary consideration in the design, development and use of AI.

Three things stand out about its approach. The first is how involved humans should be in AI-driven decisions. Singapore lays out three models with varying degrees of human oversight and risk. It is critical for organisations to understand the level of risk they are taking on, to be prepared for it, and to review the impact of those risks periodically.
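To make this concrete, the framework’s three oversight models (human-in-the-loop, human-over-the-loop and human-out-of-the-loop) can be pictured as a mapping from an assessed harm profile to a level of supervision. The sketch below uses hypothetical scores and thresholds; the framework leaves that calibration to each organisation.

```python
# Illustrative only: map an AI use case's harm profile to one of the
# three human-oversight models. The 0-1 scores and the thresholds are
# assumptions, not values prescribed by Singapore's framework.
def oversight_model(severity: float, probability: float) -> str:
    """Pick an oversight model from the severity and probability of harm."""
    risk = severity * probability  # a simple combined risk score
    if risk >= 0.5:
        return "human-in-the-loop"    # a human approves every decision
    if risk >= 0.2:
        return "human-over-the-loop"  # humans monitor and can intervene
    return "human-out-of-the-loop"    # fully automated, reviewed periodically

# Example: a loan-approval system with severe, likely harms
print(oversight_model(severity=0.9, probability=0.7))  # -> human-in-the-loop
```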

The second is ensuring AI models are explainable. If companies cannot explain how their AI systems arrive at predictions and recommendations, customers will have a hard time trusting those systems. For instance, although the accuracy of AI cancer screens is high, doctors agree with only about half the results because they believe the systems cannot explain their reasoning. This has a clear impact on whether patients trust the results.

Singapore’s AI framework suggests that counterfactuals are a powerful and simple way of explaining the decision-making process. For example: “you would have been approved if your average debt was 15% lower”, or “these are users with similar profiles to yours who received a different decision”.
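A counterfactual of the first kind can be generated by searching for the smallest change to an input that flips the model’s decision. Below is a minimal sketch; the toy credit model, the feature names, the fixed step size and the single-feature search are all illustrative assumptions, not part of the framework.

```python
# A minimal sketch of generating a counterfactual explanation for a
# credit decision. The toy model, feature names and search strategy
# are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [income (k$), debt-to-income ratio] -> approved?
X = np.array([[60, 0.20], [40, 0.30], [55, 0.60],
              [45, 0.25], [50, 0.55], [65, 0.65]])
y = np.array([1, 1, 0, 1, 0, 0])
model = LogisticRegression().fit(X, y)

def debt_counterfactual(applicant, step=0.01, max_steps=100):
    """Lower the debt ratio until the decision flips, then report the change."""
    if model.predict(applicant.reshape(1, -1))[0] == 1:
        return "Application already approved."
    candidate = applicant.copy()
    for _ in range(max_steps):
        candidate[1] -= step
        if model.predict(candidate.reshape(1, -1))[0] == 1:
            cut = (applicant[1] - candidate[1]) / applicant[1]
            return f"You would have been approved if your debt ratio were {cut:.0%} lower."
    return "No counterfactual found by adjusting the debt ratio alone."

print(debt_counterfactual(np.array([50.0, 0.55])))
```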

A third area of importance is understanding the data being used to train, test and deploy AI models. Organisations must understand where their data originally came from, how it was collected and how its accuracy has been maintained over time, according to Singapore’s AI framework.

They will need to minimise biases in their data. Singapore singles out two common ones: selection bias, where the data used to train the model does not fully represent the real-world cases the model will decide on; and measurement bias, where the data collection method itself skews the results. Organisations can check for such systematic biases by testing their models with data from different demographic groups.
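One simple version of that check is to compare the model’s positive-prediction rate across groups; a large gap is a signal to look upstream for selection or measurement bias. The column names and the threshold in the sketch below are assumptions for illustration, not values from the framework.

```python
# A minimal sketch of testing for systematic bias by comparing a model's
# approval rate across demographic groups. Column names and the 20%
# threshold are illustrative assumptions.
import pandas as pd

def demographic_parity_gap(predictions: pd.DataFrame) -> float:
    """Largest gap in positive-prediction rate between any two groups."""
    rates = predictions.groupby("group")["approved"].mean()
    return float(rates.max() - rates.min())

# Toy predictions from an already-trained model
preds = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

gap = demographic_parity_gap(preds)
if gap > 0.2:  # the acceptable gap is a policy choice, not a fixed rule
    print(f"Parity gap of {gap:.0%}: check for selection or measurement bias.")
```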
 

Embedding trust in AI


Companies like Huawei are in a unique position to create trust in frontier technologies. We are responsible for building the entire suite of hardware and software that goes into AI products, which is why Huawei last week launched a whitepaper setting out our principles for AI and detailing how we ensure security, trust and transparency.

We will invest US$2 billion over the next five years to build secure, trustworthy, and high-quality products in our ICT infrastructure business. We have signed cyber security agreements with over 3,400 suppliers worldwide and data processing agreements with over 1,300 companies.

“Today, we're probably the most open, most evaluated and transparent company in the world. We will never harm any country or any individual, and never accept any request to use Huawei products for malicious purposes. If we are ever put in such a position, we would rather close the business,” the whitepaper says.

As Singapore’s framework highlights, internal governance structures with clear roles and responsibilities are critical to the ethical use of AI. At Huawei, high-risk business activities must be approved by two senior officials for privacy protection. Our Global Cyber Security and Privacy Officer, John Suffolk, reports directly to the CEO and sits on the company’s global committee responsible for managing cyber security and protecting user privacy.

We review and verify our AI systems and products to ensure they are explainable and traceable, and we provide evidence of this. We also continuously adjust controls over AI applications based on user feedback on security, privacy and trustworthiness. And we continuously monitor all data activities, integrating the logs into the company’s centralised management platform.

Singapore is hugely important for our work on AI, serving as a hub for regional companies to access the technology and for our Chinese partners to share their technical expertise with local ones.

As AI advances at a rapid rate, security and privacy protection must remain key priorities. Hear more from Huawei’s Global Cyber Security & Privacy Officer John Suffolk from 10:00-10:30 on 3 October at the Auditorium, Hall 406, Level 4, at GovWare 2019.

GovWare is the region’s most established conference and showcase for cybersecurity, and the cornerstone event of Singapore International Cyber Week. GovWare 2019 takes place from 1 to 3 October at Suntec Singapore Convention & Exhibition Centre. Register for the event here.