Since 2019, government-sponsored initiatives around AI have proliferated across Asia Pacific. Such initiatives include the setting up of cross-domain AI ethics councils, guidelines and frameworks for the responsible use of AI, and other initiatives such as financial and technology support.

The majority of these initiatives build on the respective countries’ data privacy and protection acts. This is a clear sign that governments see the need to expand existing regulations as they position AI as a key driver of their digital economies.

All initiatives to date are voluntary, but there are already indications that existing data privacy and protection laws will be updated and expanded to cover AI. In anticipation, your data and technology governance initiatives must evolve now.

Data and Technology Governance must step up to embrace Ethics and Risk

Traditionally, data governance (and the governance of tech associated with data) has focused on topics such as master data management, data quality, and data retention — all primarily operational. With the rise of privacy laws and data protection acts such as the General Data Protection Regulation (GDPR) in the EU and the Personal Data Protection Act (PDPA) in Singapore, the scope of data governance has been expanded to include data privacy, personal data protection, and data sovereignty. This has shifted data governance out of the operational corner and into the spotlight of regulatory compliance and enforceable laws.

With AI now ready for prime time — meaning large-scale production deployments — data and technology governance must step up again to include data and AI ethics and AI risk management.

Making Singapore a trusted, AI-enabled digital economy

If data is the new oil, then what are its Exxon Valdez and Deepwater Horizon moments?

As with environmental disasters, any major blunder involving unethical use of data or AI will put the brands involved under extreme pressure from consumers and governments. While Singapore has so far escaped a major data or AI disaster, the proliferation of AI means it is only a matter of time before one occurs.

In 2018, an AI and ethics council initiated by the Singapore government set out to address three major risk categories for the AI-enabled digital economy envisioned for Singapore:

• Technology risk: countering data misuse and rogue AI.
• Social risk: building trust between agencies, companies, employees, and customers.
• Economic and political risk: securing Singapore’s future in a digital economy.

Ethics and social responsibility are core principles of Singapore’s Model AI Governance Framework

The framework follows two guiding principles.

The first one is to ensure that AI decision-making is explainable, transparent, and fair. Explainability, transparency, and fairness — “generally accepted AI principles” — are the foundation of ethical AI use.

Absent from the framework, however, is the notion of accountability.

The framework’s second principle is that AI solutions should be human-centric and operate for the benefit of human beings. This ties AI ethics to the larger dimension of corporate values, corporate social responsibility, and the corporate risk management framework.

A risk management approach for tackling the risks associated with deploying AI at scale

In alignment with other global frameworks, the Singapore Model AI Governance Framework recommends a risk management approach to address the technology risk associated with AI. Ideally, this would be a dimension added to corporate risk management frameworks. This will elevate the risk beyond IT and individual business units to the corporate level (following in the footsteps of cybersecurity risk).

In particular, the Model Framework recommends that organisations:

• Set up AI governance structures and measures and link them to corporate structures.
• Determine the level of human involvement with a severity-probability matrix.
• Use data and model governance for responsible AI operations.
• Set up clear, aligned communication channels and interaction policies.
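The second recommendation — choosing a level of human involvement with a severity-probability matrix — can be sketched in code. This is an illustrative assumption, not an artefact of the Model Framework itself: the involvement levels (human-in-the-loop, human-over-the-loop, human-out-of-the-loop) follow the Framework's terminology, but the scoring scale and thresholds below are hypothetical.

```python
def human_involvement(severity: int, probability: int) -> str:
    """Map an AI use case's harm severity and probability (each scored
    1 = low to 3 = high) to a level of human involvement.

    Thresholds are illustrative assumptions, not prescribed values."""
    risk = severity * probability
    if risk >= 6:
        # Severe and/or likely harm: a human approves each decision.
        return "human-in-the-loop"
    if risk >= 3:
        # Moderate risk: a human monitors outcomes and can intervene.
        return "human-over-the-loop"
    # Low risk: decisions may run fully automated.
    return "human-out-of-the-loop"


# Example: a loan-approval model with high harm severity (3) and
# medium probability (2) would require a human in the loop.
print(human_involvement(3, 2))  # prints "human-in-the-loop"
```

In practice, an organisation would calibrate the severity and probability scales, and the thresholds between involvement levels, against its own corporate risk management framework.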

Organisations must start now on risk management and on establishing accountability chains for AI

As with cybersecurity risk before it, regulatory initiatives and consumer demand are joining forces to drive AI risk management to the top of the corporate agenda. The key task for organisations now is to start early and build internal awareness of AI risk.

Deploying AI-enabled decision processes at scale must be accompanied by investments in governance and risk management. Guidelines such as Singapore’s Model AI Governance Framework offer nonbinding recommendations, but organisations must start developing these capabilities internally. As the Model Framework evolves, it has added use case libraries as well as assessment tools — although adopting them may still challenge all but the largest organisations.

Forrester recommends that organisations start on the following activities:

• Set up AI governance structures and measures and link them to corporate structures.
• Determine the level of human involvement with a severity-probability matrix.
• Use data and model governance for responsible AI operations.
• Set up clear, aligned communication channels and interaction policies.
• Evaluate your AI supply chain and start building out accountability chains.

Ultimately, understand how you can build trust with your customers, partners, and employees into your responsible use of data and AI — and turn this trust into your competitive advantage!

To understand how you can accelerate the digital future through exceptional government customer experience, download Forrester’s complimentary guide here.