How AI is changing the future of healthcare

By Microsoft

Leading AI expert Eric Horvitz, head of Microsoft Research’s Global Labs, shares his view of its implications.

In 1942, Isaac Asimov attempted to lay out a moral framework for how robots could serve humans. The science fiction writer came up with the “three laws of robotics”, meant to prevent machines from harming their human creators.

It is a question that Eric Horvitz, technical fellow in Microsoft’s Artificial Intelligence and Research group and head of Microsoft Research’s Global Labs, has been studying for decades. In 2014, he set up the ‘One Hundred Year Study on Artificial Intelligence’, which will examine the future of AI every five years for a century.

The project’s first report last year said that “AI-based applications could improve health outcomes and the quality of life for millions of people in the coming years”. Horvitz discusses how AI is being used in healthcare, and what we need to do to ensure people trust this technology.

1. Predicting readmissions

Microsoft built one of the first systems to predict hospital outcomes. Thousands of patient records were used to train machine-learning algorithms to predict the probability that a patient will be readmitted after being discharged. “This is helpful to guide monitoring, therapy, and engagement with that patient,” Horvitz says.

The algorithms help uncover the “gaps in the minds of a physician”: a pattern or trend the doctor may not have spotted when clearing the patient for discharge. Getting ahead of readmissions can save hospitals millions of dollars and help keep patient costs down.
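The idea can be sketched in a few lines. This is a minimal illustration, not Microsoft's system: the features (age, prior admissions, length of stay), the synthetic data, and the plain logistic-regression model are all hypothetical, chosen only to show how a readmission probability might be learned from records.

```python
import math
import random

# Hypothetical features per discharged patient:
# [age in decades, prior admissions, length of stay in days].
# Label: 1 if readmitted, 0 otherwise. All data here is synthetic.
random.seed(0)

def make_patient():
    age = random.uniform(2, 9)
    prior = random.randint(0, 5)
    stay = random.uniform(1, 14)
    # Invented rule: risk grows with prior admissions and stay length.
    p = 1 / (1 + math.exp(-(0.6 * prior + 0.15 * stay - 2.5)))
    return [age, prior, stay], 1 if random.random() < p else 0

data = [make_patient() for _ in range(1000)]

# Fit a logistic-regression model with plain stochastic gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.01
for _ in range(100):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        err = 1 / (1 + math.exp(-z)) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def readmission_risk(x):
    """Predicted probability that this patient returns after discharge."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# A patient with many prior admissions and a long stay should score
# higher risk than an otherwise similar low-history patient.
print(readmission_risk([6.0, 5, 12.0]) > readmission_risk([6.0, 0, 2.0]))
```

In practice such a score would guide which discharged patients receive the extra monitoring and engagement Horvitz describes.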

2. Cutting human errors

Another way AI can help in hospitals is by preventing deaths due to human error. Over a quarter of a million patients die in US hospitals because of “preventable human error”, Horvitz says. “That’s like a very sizable city disappearing without headlines, every year in the US and your countries too.”

Microsoft Research is working with the University of Washington in Seattle to see how AI can “help doctors understand the blind spot”. AI systems can recognise anomalies in clinical best practices and remind doctors when they fall short. This could save “tens of thousands” of patients a year, he believes.
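At its simplest, catching a "blind spot" means comparing what was done against what best practice says should be done. A minimal sketch of that check, with an invented checklist (the step names are illustrative, not drawn from any real clinical guideline):

```python
# Hypothetical best-practice checklist for a procedure; the step
# names are illustrative only, not from any real clinical guideline.
BEST_PRACTICE = {
    "hand_hygiene",
    "verify_patient_identity",
    "review_allergies",
    "confirm_medication_dose",
}

def missing_steps(performed):
    """Return best-practice steps not yet recorded, as a reminder list."""
    return sorted(BEST_PRACTICE - set(performed))

# Two steps recorded so far; the system reminds the clinician of the rest.
print(missing_steps(["hand_hygiene", "verify_patient_identity"]))
# prints ['confirm_medication_dose', 'review_allergies']
```

A deployed system would of course learn these patterns from data rather than a fixed list, but the reminder logic is the same: flag the gap before it becomes an error.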

3. Managing epidemics

Managing disease and epidemics is a third way AI can assist healthcare. Microsoft ran a project examining data sets and histories of cholera outbreaks, including geography, landmass and weather patterns.

The results were encouraging. “We were motivated by the fact that you can go from 50% to 1% deaths if you have fresh water available,” he says. “If you knew where it was going to be, you could get water there.” This would allow international, non-profit and government organisations to plan their resources by predicting the hotspots.
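The planning step can be pictured as ranking regions by predicted risk and sending water to the top of the list. This is a toy sketch, assuming invented region records and an arbitrary weighting of outbreak history against recent rainfall:

```python
# Hypothetical region records: past cholera outbreaks per decade and
# recent rainfall in mm. All names and numbers are invented.
regions = {
    "delta_basin":   {"past_outbreaks": 8, "rainfall_mm": 320},
    "highlands":     {"past_outbreaks": 1, "rainfall_mm": 90},
    "coastal_plain": {"past_outbreaks": 5, "rainfall_mm": 210},
}

def hotspot_score(r):
    # Arbitrary weighting: outbreak history matters most,
    # heavy recent rainfall raises the risk further.
    return 0.7 * r["past_outbreaks"] + 0.3 * (r["rainfall_mm"] / 100)

# Rank regions from highest to lowest predicted risk.
ranked = sorted(regions, key=lambda name: hotspot_score(regions[name]),
                reverse=True)
print(ranked[0])  # the region to pre-position fresh water supplies
```

A real system would learn those weights from outbreak data, but the output is the same kind of ranked list an aid organisation could plan deliveries around.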

4. Assisting surgeons

Finally, robots can assist in surgeries, Horvitz says. Johns Hopkins University in the US, for example, is studying the use of “computational surgery”, he says. They are teaching computers to understand the “grammar of surgery.” “If you could do this, you could build systems that could work hand in hand with humans,” he adds.

Such human-AI collaboration is just the start, he says. “Imagine this back and forth in an intellectual space - signaling who goes next and so on.” A Microsoft team in Cambridge has built a system that can be controlled from a distance by hand gestures and poses. In the future, surgeons might not even need to be in the operating theatre, yet could retain full control from a distance.

Trust, fairness and transparency

For many of these applications to work, “AI systems will have to work closely with care providers and patients to gain their trust”, the One Hundred Year study found. “Advances in how intelligent machines interact naturally with caregivers, patients, and patients’ families are crucial,” it adds.

Microsoft is addressing these issues through a new advisory panel on AI and Ethics in Engineering and Research, reporting directly to the CEO. The panel is drawing up policies and plans governing who shouldn’t be allowed to use AI tools, and which uses they shouldn’t be allowed for. “We will be public about the conclusions as they are rolled out over time,” Horvitz says.

As Asimov once wrote: “You do not refuse to look at danger; rather, you learn how to handle it safely.”