AI can transform healthcare, but we need to tread cautiously

By Amit Roy Choudhury

The potential benefits of AI in research, diagnosis, telemedicine and other areas are huge, but we need to tackle bias and privacy issues first.

As people live longer, healthcare has become a major area of attention for governments across the world. Longer lifespans mean more people suffering from chronic conditions like dementia, heart disease, and limited mobility that require constant medical attention.

An aging population also means that the percentage of working-age people goes down. In healthcare, this translates to fewer professionals available to take care of not only elderly patients, but also younger people who need medical attention.

The World Health Organisation (WHO) estimates a global shortfall of 18 million health workers by 2030. Fortunately, there have been great advances in healthcare technology and this has the potential to more than offset the fall in manpower.

The promise of tech


The use of artificial intelligence (AI) is already transforming the way healthcare is delivered. The medical sector is moving away from traditional post facto treatment, meaning one gets medical attention after contracting an ailment, to a preventive treatment regime with the use of AI-enabled diagnostic and self-help tools.

This enables medical intervention before an incident occurs. Imagine being treated for a heart attack the day before it would have happened.

Telehealth and remote monitoring are also poised to help keep patients out of hospitals and instead move the treatment to their homes. This is one of the major goals of Singapore’s Smart Nation vision.

Medical insurance


Many medical insurance companies are partnering with healthcare start-ups that use AI-driven “wellness engines” to track data from wearable devices. This allows them to provide healthy living advice and health risk reports to users.

By partnering with or buying up such start-ups, health insurance providers are looking to lower claims pay-outs and offer more rationalised premiums based on actual health conditions, rather than averages calculated from age, gender, race, and pre-existing conditions.

For example, thanks to AI algorithms, it is now possible for an insurance company to provide different plans to two heart patients in the same age, gender and race bracket, based on who takes better care of their health as captured by data from wearables. Insurers are also experimenting with variable healthcare premiums; you pay a lower premium in a particular month if you meet or exceed certain exercise targets, like walking 10,000 steps at least four times a week.
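A variable-premium rule like the one described can be sketched in a few lines. This is purely illustrative: the base premium, discount rate, and thresholds are invented here, not taken from any real insurer.

```python
# Hypothetical sketch of a variable-premium scheme: a discount applies for
# any week with at least four days of 10,000+ steps. All figures illustrative.

BASE_PREMIUM = 100.0   # monthly premium in dollars (assumed)
STEP_TARGET = 10_000   # daily step goal
DAYS_REQUIRED = 4      # active days needed per week

def weekly_discount(daily_steps: list[int]) -> bool:
    """Return True if the week meets the activity target."""
    active_days = sum(1 for steps in daily_steps if steps >= STEP_TARGET)
    return active_days >= DAYS_REQUIRED

def monthly_premium(weeks: list[list[int]], discount_rate: float = 0.05) -> float:
    """Reduce the premium by discount_rate for each qualifying week."""
    qualifying = sum(1 for week in weeks if weekly_discount(week))
    return BASE_PREMIUM * (1 - discount_rate * qualifying)

month = [
    [12_000, 8_000, 11_000, 10_500, 9_000, 13_000, 4_000],  # 4 active days
    [5_000] * 7,                                            # 0 active days
    [10_000] * 7,                                           # 7 active days
    [9_999, 10_000, 10_001, 10_002, 3_000, 2_000, 1_000],   # 3 active days
]
print(monthly_premium(month))  # two qualifying weeks -> 90.0
```

The same step-count data stream from a wearable would drive the real version of such a rule, with the thresholds set by the insurer.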

AI-driven consumer health applications are encouraging healthier lifestyles as the data from wearables provide better visibility of health conditions to users. This allows for more proactive self-help management of health and well-being by individuals.

Fitness bands and other wearables record vital measurements like heart rate, exercise, sleep, and other parameters as often as once every few minutes throughout the day. Individually, these readings are not medical grade, meaning doctors would not base treatment decisions on them.

However, the sheer amount of data collected, several thousand readings every day, can be analysed by AI programmes to arrive at a very accurate picture of a particular user's health. A health assessment based on this data, compared against reference data stored within an AI programme, can be more accurate than what doctors conclude from bi-annual or even monthly check-ups.
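The statistical intuition behind this, that many individually unreliable readings can still produce a reliable estimate, can be shown with a toy simulation. The "true" heart rate and the error size below are assumed values chosen for illustration, not medical parameters.

```python
# Illustrative simulation (not medical software): noisy wearable readings,
# individually not "medical grade", still yield an accurate estimate once
# aggregated, because random errors average out.
import random

random.seed(42)
TRUE_RESTING_HR = 62  # assumed true value, used only to generate the simulation
readings = [TRUE_RESTING_HR + random.gauss(0, 8) for _ in range(2_000)]

single_error = abs(readings[0] - TRUE_RESTING_HR)
mean_estimate = sum(readings) / len(readings)
mean_error = abs(mean_estimate - TRUE_RESTING_HR)

print(f"one reading off by {single_error:.1f} bpm")
print(f"average of 2,000 readings off by {mean_error:.2f} bpm")
```

Real wellness engines go well beyond averaging, but the underlying advantage is the same: volume of data compensating for the low quality of each individual reading.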

AI in research


Apart from medical diagnostics, AI is also being used in medical research and in developing new treatment regimes. One of the wonders of modern medicine has been the swift development of vaccines for the novel coronavirus pandemic that struck the world early last year.

AI has enabled companies to cut down the normal time required to develop a vaccine from several years to less than a year in the case of the Covid-19 vaccine. The early arrival of the vaccines has potentially saved several hundred thousand lives across the world.

AI is also used in medical facilities for more mundane jobs like setting duty rosters and patient bed allotments. In Singapore, AI is used in a range of ways, from developing computer models that can predict how likely it is for patients to fall, to creating 3D holograms to assist doctors in their diagnosis and in planning for operations.

Healthcare professionals in Singapore note that AI has great potential in intensive care units to alert hospital staff to possible medical incidents by quickly detecting anomalies in a patient’s vital signs. This can help lower mortality rates in critical care patients.
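The core idea of such alerts, flagging a vital-sign reading that deviates sharply from a patient's recent baseline, can be sketched simply. Real ICU systems are far more sophisticated; the threshold and readings here are illustrative only.

```python
# Minimal sketch of a vital-sign anomaly alert: flag a new reading more than
# z_threshold standard deviations away from the patient's recent baseline.
from statistics import mean, stdev

def is_anomalous(history: list[float], new_reading: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag a reading far outside the baseline established by history."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return new_reading != baseline
    return abs(new_reading - baseline) / spread > z_threshold

heart_rate_history = [78, 80, 76, 79, 81, 77, 80, 78]  # recent readings (bpm)
print(is_anomalous(heart_rate_history, 79))    # within baseline -> False
print(is_anomalous(heart_rate_history, 120))   # sudden spike -> True
```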

There is no doubt that AI has the potential to transform the healthcare industry and bring it in line with 21st-century expectations. However, as in all other spheres, there are pitfalls in the unrestricted use of AI in healthcare that need to be taken into consideration.

As with every sector where AI is used, the algorithms are only as good as the data fed into them. There is also a chance of biases creeping into both the algorithms that constitute the AI and the training data. Bad data and biased AI are harmful in every industry, but even more so in healthcare, where they can affect the lives and well-being of patients.

Unintended biases


Massive amounts of data need to be fed to AI algorithms. The problem is that the health data available for use in AI programmes can have unintended biases built into it.

It may not be so much of an issue in Singapore, but in many countries only better-off patients get good hospital care and, depending on the country, the “better off” tend to be of a certain race and class.

Data that is mined for AI applications could become biased towards a certain group of people. This, in turn, affects outcomes if applied to people who are outside this particular group.

As this report notes, an AI algorithm developed to diagnose malignant skin lesions from images became biased because people with white skin are more likely to suffer from skin cancer than people with darker skin tones. The training images fed to the algorithm were therefore dominated by pictures of lighter-skinned people, since fewer images of darker-skinned patients were available.

As a result, the system was 11 to 19 per cent more accurate in providing the correct diagnosis for people with lighter skin types, and 34 per cent less accurate for darker-skinned individuals. This is a telling example of how unintended bias can creep into AI programmes.
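Accuracy gaps like this are caught by evaluating a model separately for each group rather than reporting one overall figure. The sketch below uses fabricated predictions and labels purely to illustrate the per-group check; it is not the study's data or method.

```python
# Toy per-group evaluation: measure a classifier's accuracy separately for
# each skin-tone group. All (prediction, ground truth) pairs are fabricated.

def accuracy(pairs: list[tuple[int, int]]) -> float:
    """Fraction of (prediction, truth) pairs that match."""
    return sum(p == t for p, t in pairs) / len(pairs)

# (prediction, ground truth) per image, grouped by skin tone
results = {
    "lighter": [(1, 1), (0, 0), (1, 1), (0, 0), (1, 1), (0, 0), (1, 1), (1, 0)],
    "darker":  [(1, 1), (0, 0), (0, 1), (1, 0), (0, 1), (0, 0), (1, 0), (0, 1)],
}

for group, pairs in results.items():
    print(f"{group}: {accuracy(pairs):.0%}")
# A large gap between the per-group accuracies is the signal that the
# training data, or the model, is biased toward one group.
```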

Privacy


While the above case highlights the problems likely to be faced in the operational use of AI, there are wider privacy issues involved that could have societal ramifications. As mentioned earlier, there has already been early-stage adoption of wearables to determine health insurance premiums. Telemedicine, a new field in out-of-clinic healthcare, is also leaning heavily toward the use of wearables to remotely monitor patient health.

The use of these devices involves agreeing to the collection of personal health data, a highly confidential data set. There would, of course, be safeguards built into this system, but health data is difficult to anonymise.

What happens if the data is hacked? How long does the data stay stored? In what other areas will this data be used? These are some of the questions that need answers.

The human touch


The final point to consider is that healthcare is a profession that relies heavily on the personal touch. Many patients crave the reassuring presence of their doctor and being able to talk to them, ask questions and discuss their health problems, even issues that may not be relevant to the actual ailment being treated.

If the healthcare professional is an AI, would it be able to give the kind of assurance and confidence that a live doctor or a nurse can to an anxious patient?

Doctors and nurses are not infallible; a well-trained AI programme may be better at diagnosis because it does not suffer from the usual human frailties, like tiredness, distraction, and misjudgement, that can afflict even the best of medical professionals. Despite that, the human touch, a vital part of treatment, is missing from AI programmes.

All this is certainly not to say that AI is not suited for the healthcare industry. It has the potential to revolutionise the sector and bring in new and better ways of treatment.

As with other sectors, the application of AI will have to be a journey of discovery. It will not be just a piece of technology bolted onto the healthcare edifice. There will be twists and wrong turns, but ultimately a path will be carved out that leads to AI-driven universal healthcare for all.

Amit Roy Choudhury, a media consultant and senior journalist, writes about technology for GovInsider.

The region's leading AI innovation forum AI x GOV, powered by GovInsider, is happening on 27 July. Register here.