Why Singapore’s approach to ethical AI embraces open source

By Red Hat

At the heart of Singapore’s innovative approach to testing and nurturing ethical AI – the AI Verify Foundation – is open source technology. Why? Dr Ong Chen Hui, Assistant Chief Executive of the Business and Technology Group at IMDA, and Guna Chellappan, General Manager, Red Hat, explain.

The Infocomm Media Development Authority (IMDA) is partnering with Red Hat, a provider of enterprise AI solutions, to jointly nurture responsible AI. Image: Red Hat

In June 2023, the Singapore Government’s Infocomm Media Development Authority (IMDA) announced the launch of the AI Verify Foundation, a global open source community aimed at convening stakeholders worldwide to shape the future of international AI standards.

 

This initiative aims to nurture the development and use of AI Verify, an AI governance testing framework and software toolkit that checks AI models against ethical governance principles through technical tests, process checks, and automated reports.

 

Critical to the success of the initiative will be the “collective wisdom” of the global open source community, said Dr Ong Chen Hui, Assistant Chief Executive of the Business and Technology Group at IMDA, during a recent media roundtable. The Foundation has more than 90 corporate members to date, and she shared that a vibrant community will be critical to keeping pace with AI’s growth.

 

This is why the statutory board is partnering with Red Hat, a provider of enterprise AI solutions, to access the open source community and jointly nurture responsible AI.

1. Mitigating the risks of AI models

 

Increasingly, businesses will need testing toolkits like AI Verify to ensure their AI models are safe, fair, and transparent. 

 

Even if businesses are not building foundation models themselves, the AI applications they build on top of them will still require such guardrails, she shared.

 

“Businesses need a way to think about the risks that are introduced when they use foundation models. They also need to think about the risk to the business when they deploy these models,” she said. This will help them protect their customers, their business partners and their businesses.

 

And why open source?

 

Dr Ong explained that AI Verify supports a flexible architecture. This means that the open source community can plug in the right report templates and technical tests to account for the governance needs of AI applications across different domains, from healthcare to finance. 
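
As a rough illustration of how such a pluggable test architecture might work, here is a minimal sketch in Python. The `GovernanceTest` interface, registry, and report builder below are hypothetical names invented for this example, not AI Verify’s actual API.

```python
# Minimal sketch of a pluggable governance-test architecture.
# Class, registry, and function names are illustrative only,
# not AI Verify's actual API.
from abc import ABC, abstractmethod


class GovernanceTest(ABC):
    """A single technical test that a community contributor can plug in."""

    name: str = "base"

    @abstractmethod
    def run(self, model, dataset) -> dict:
        """Run the test and return findings for the automated report."""


REGISTRY: dict[str, GovernanceTest] = {}


def register(test: GovernanceTest) -> None:
    """Make a contributed test available to the report generator."""
    REGISTRY[test.name] = test


def build_report(model, dataset, tests: list[str]) -> dict:
    """Run a selected set of tests, e.g. a healthcare or finance template."""
    return {name: REGISTRY[name].run(model, dataset) for name in tests}
```

In such a design, a hospital and a bank could share the same core toolkit while plugging in the report templates and technical tests that match their own governance needs.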

 

Transparency in the code also plays a part. “I think it will be ironic if we're asking people to trust a toolkit that is supposed to encourage transparency in AI when the tool itself is not transparent,” she shared.

 

Drawing on her cybersecurity background, she explained that many of the existing toolkits within cybersecurity are open source, and the open source community is active in ensuring such toolkits are updated as new threats emerge.

 

Guna Chellappan, General Manager of Red Hat, shared that IMDA began working with Red Hat last year on open sourcing AI Verify. The work with these communities has just begun, and Red Hat will help IMDA organise the open source projects, he explained.

2. Seat belts for development

 

AI governance toolkits like AI Verify are similar to seat belts: they can be fitted to many different vehicles moving at different speeds.

 

But while the concept applies across the board, different automobiles might require custom seat belts to account for their unique challenges. For instance, race cars use racing harnesses, which provide greater protection.

 

Likewise, AI tools will require customised tests to account for differences in application, she explained. Once the right tests are deployed to protect end users, businesses will feel comfortable accelerating and scaling up AI innovation.

 

Currently, businesses are largely focusing on AI tools that improve internal productivity, such as writing co-pilots, particularly in highly regulated industries like banking. They will be more reluctant to roll out public-facing applications if such tests are not accessible to them.

 

For instance, businesses will need to be able to test that their applications can account for the diversity and local contexts in the countries they operate in. Regulators may also wish to check that AI applications account for local AI regulations as well.

3. Accelerating testing sciences 

 

But right now, regulators have not yet determined what “seat belts” are required for AI – that is, what regulations are needed to safeguard AI, and which might unnecessarily restrict innovation.

 

IMDA is working on accelerating testing sciences and standards, she shared. This can help ensure that AI Verify can be used in different jurisdictions with different AI regulations.

 

“Different countries may have different regulators thinking about fairness differently… but when you look at the testing sciences, fairness means that the accuracy for one demographic should be similar to the accuracy for another. It cannot be that AI does very well for one demographic and does poorly for the rest,” said Dr Ong.

 

“There are some universal principles, such as fairness, that I believe will exist across jurisdictions, but how they perceive each of those principles, such as fairness, may be different across jurisdictions. There’s a place for science and there’s also a place for regulations,” she said.
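
Dr Ong’s description of fairness testing – comparing accuracy across demographic groups – can be made concrete. The sketch below computes per-group accuracy and flags models whose accuracy diverges too much between groups; the function name and the 0.05 gap threshold are illustrative assumptions, not a standard prescribed by AI Verify or any regulator.

```python
import numpy as np


def accuracy_parity(y_true, y_pred, groups, max_gap=0.05):
    """Check that accuracy is similar for every demographic group.

    The max_gap threshold of 0.05 is an illustrative assumption,
    not a value prescribed by AI Verify or any regulator.
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    per_group = {
        g: float((y_pred[groups == g] == y_true[groups == g]).mean())
        for g in np.unique(groups)
    }
    gap = max(per_group.values()) - min(per_group.values())
    return {"per_group_accuracy": per_group, "gap": gap, "passes": gap <= max_gap}


# Example: group B's accuracy is far below group A's, so the check fails.
report = accuracy_parity(
    y_true=[1, 0, 1, 1, 0, 1, 1, 0, 1, 0],
    y_pred=[1, 0, 1, 1, 0, 0, 0, 1, 1, 1],
    groups=["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
)
print(report)
```

Different jurisdictions might set different thresholds or protected attributes, but the underlying measurement – comparing performance across groups – stays the same, which is where the testing science Dr Ong describes comes in.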

4. Limiting the concentration of AI power

 

But the question of foundation models remains – how do we check the performance of foundation models, which may be concentrated in the hands of a few large companies?

 

“It's important to ensure that the knowhow needed to build foundation models doesn't just exist in a few countries or in big companies,” she said. 

 

One approach is for countries to build their own national foundation models, she said. The UAE has released three such models.

 

These foundation models will also have to be responsive to local needs, such as Singapore’s multiculturalism. 

 

Open source foundation models, such as Meta’s Llama 2, may also be easier to check against governance principles, though big tech companies may still end up dominating the foundation model market due to high barriers to entry.

 

Workers and end users may also play a critical role in identifying and limiting AI bias, as GovInsider previously reported.