Co-creating responsible and ethical AI with healthcare stakeholders

By Thoughtworks

Embedding ethical principles into AI development requires an integrated, multistakeholder approach, involving not only developers but also business users and customers.

Panellists at Healthcare Day highlighted the need to complement theoretical frameworks with ground-level perspectives to identify practical use cases for AI. Image: GovInsider

When developing artificial intelligence (AI) solutions, start from the user’s perspective, understand who is being impacted and work closely with them to build effective and safe solutions.

 

This was the consensus drawn from the panel titled Beyond Compliance: Responsible and Ethical AI in Singapore’s Public Healthcare at GovInsider Live: Healthcare Day on September 16.

 

The panellists included Health Promotion Board (HPB)’s Director of Innovation Office, Policy & Strategy Development, Terence Ng; MOH Office of Transformation (MOHT)’s Senior Data Scientist Praveen Deorani; SingHealth Duke-NUS AI in Medicine Institute (AIMI)’s Co-Director Associate Professor Liu Nan; and Thoughtworks’ Head of Technology APAC May Xu.

 

The key, panellists said, was to complement theoretical frameworks with ground-level perspectives to identify practical use cases for AI.

 

Speakers added that technology should not be the starting point. Instead, the focus should be on understanding the user's perspective, be it the clinician's or the patient's, to build a solution that is both ethical and practical.

 


Putting ethical principles into practice

 

Public healthcare institutions can embed responsible and ethical AI principles into their daily workflows using specific tools such as threat modelling and the data ethical card practice. While the Model AI Governance Framework for Generative AI is a good starting point, these practices bring its principles into the team's day-to-day workflow.

 

For example, the data ethical card practice guides discussions on how data is used, accessed and stored. This ensures that responsible AI is not just a theoretical concept, but is woven into the product team's thought process.

 

Embedding these principles into technology is not just a technical problem, but a business challenge involving multiple stakeholders, from business users to end users, be they clinicians or patients.

 

Speakers highlighted the need to adopt a holistic approach to tackle the challenge of data or algorithm bias, which was a concern raised by the audience. 

 

Because bias is so complex, a comprehensive strategy is needed: one that covers everything from data preparation and cleaning to model building and ongoing monitoring.

Trade-offs over perfection

 

The core idea is that building a perfect, 100 per cent accurate AI model isn’t always the goal. 

 

Instead, it’s about deciding how much error is acceptable for a given use case and context. Public healthcare institutions need to take a pragmatic approach that prioritises balancing different needs, such as accuracy, ethics and practicality.

 

A step-by-step evaluation of the data was important to ensure it is representative of the real world. The panel recognised that bias is not an issue to be fixed at a single stage, but one to be addressed from initial data collection all the way through the ongoing operation of the AI system.

 

Responsible AI should be practised as a cross-functional requirement, as it affects every part of a project and not just one specific team. 

 

Another challenge was closing the gap between the government's intention to help the public and the public's perception of being tracked or monitored. Particularly in the public sector, agencies need to be transparent with end users about what data is used for and how it is used with AI, in order to build a trustworthy relationship with the public.

Ecosystem approach needed to govern GenAI

 

Singapore is taking a whole-of-government – or even a whole-of-society approach – in rolling out its national population health movement, Healthier SG.

 

In the case of Health Promotion Board (HPB), a single app with a user’s individual data may not be enough to promote lasting change as people’s health is influenced by other factors in their environment, be it school, friends, social life or national service (NS).

As AI systems expand to cover data from other sources and potential intervention points, an ecosystem approach is needed to govern AI systems. With the increasing scope of data collected to train AI, the speakers stressed the critical need for ethical boundaries and guardrails.

 

AI systems need to be designed with a clear set of limitations and a built-in safety net that ensures human intervention when it is needed.

 

They must stay within their designated scope and not give advice on topics they are not specialised in, such as medical matters. Additionally, a back-end system must be in place to redirect complex or serious issues to a human professional. 

 

This ensures the AI acts as a tool to support, not replace, human expertise, and provides a safety net for situations it is not equipped to handle.