Photo: man’s back with doctor, by Jesper Aggergaard on Unsplash

Why AI Won’t Replace Your Doctor

Most analysts think that AI can improve medical care but cannot replace human judgment in painful situations

A pediatric palliative care doctor recently weighed in on discussions of what AI can and can’t do in medicine: one of his infant patients was fading away, yet the most advanced techniques could not identify the cause. The problem was not how to manage masses of information but how to manage the complete absence of desperately needed information. That, he cautions, is a human skill:

While AI may be helpful in diagnosis, unless a day comes when machines can fully replicate human thought and emotions, we should be wary of allowing AI to move beyond diagnosis and actually make medical management decisions for us. And this is not just speculation: the idea of AI engines taking over medical decision-making has been discussed in the scientific literature for decades already… and has now entered a phase where researchers are actually testing models.

Elisha Waldman, “Where AI in Medicine Falls Short” at Scientific American

In the example he offers, a boy’s parents seemed more at peace after discussions with him led them to give up hope of a diagnosis and cure. They could then accept the more realistic goal of day-to-day management with fewer hospitalization crises: “When I sit quietly with families like Peter’s, almost always more is conveyed in silence, glances and body language than in words.”

A recently published six-year collaboration between AI researchers and medical professionals attempted to specify what AI can and can’t do in “the hectic, messy environment” of intensive care medicine:

The demonstration suggests how these systems might augment the work of hospital staff. If algorithms can track when a patient has fallen or even anticipate when someone is starting to have trouble, they can alert the staff that help is required. This could spare nurses the worry provoked by leaving one patient alone as they go on to care for another.

But what makes the study even more notable is its approach. Much AI research today focuses purely on advancing algorithms out of context, such as by fine-tuning computer vision in a simulated rather than live environment. But when dealing with sensitive applications such as health care, this can lead to algorithms that, while accurate, are unsafe to deploy or do not tackle the right problems.

Karen Hao, “A new study shows what it might take to make AI useful in health care” at Technology Review

One thinks of the widely publicized case of a man who was told, via a monitor mounted on a robotic cart, that he was dying. The family had supposed that the cart was just “making a routine visit.”

One problem Hao cites in the development of AI in medicine is patients’ unwillingness to share data. But they have reason to worry. A harsh reality is that medical data is becoming a shadowy market in which patients’ rights are hard to specify or may not exist at all.

Early failures in Big Data systems have made medical professionals cautious as well:

We need to look no further than the increasing discontent with the electronic medical record – a software hoax perpetrated on the government, physician and by extension onto us all. A comprehensive review was recently published by Kaiser Health News. In addition to detailing how these systems are tearing away at the physician-patient relationship, making the physician obligated more to the capturing of data than to talking with the patient, it discusses other more hidden issues – disappearing notes, orders correctly entered but not acted upon, and no accountability by the software designers or their corporate overlords.

Chuck Dinerstein, MD, “Boeing, Capt. Sullenberger & Our Relationship To Technology” at American Council on Science and Health

The review (March 18, 2019) in Kaiser Health News is titled “Death by a Thousand Clicks”: “The U.S. government claimed that turning American medical charts into electronic records would make health care better, safer and cheaper. Ten years and $36 billion later, the system is an unholy mess.” Fred Schulte and Erika Fry offer a look “inside a digital revolution that took a bad turn.”

It’s not so much that electronic systems make errors as that they make errors that health care staff can’t anticipate and correct for: errors that arise in complex machinery rather than errors made by experienced professionals. In one recent instance, an algorithm concluded that asthmatic patients who reported breathing problems could safely be sent home because they showed few complications. Skeptical medical staff found the real reason such patients had few complications: they were not sent home at all. They were sent to the ICU, where emergency measures could be taken quickly.
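The dynamic behind that incident is easy to reproduce in miniature. Below is a minimal sketch in Python, using toy data and hypothetical variable names rather than the actual system from the incident: because the historical records omit the treatment the patients received, a standard classifier learns that asthma predicts fewer complications, exactly the inversion the skeptical staff caught.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration: the treatment is a hidden confounder in the training data.
rng = np.random.default_rng(0)
n = 10_000
asthma = rng.integers(0, 2, n)      # 1 = asthmatic patient
sent_to_icu = asthma == 1           # historical practice: asthmatics always escalated
base_risk = np.where(asthma == 1, 0.40, 0.10)  # true risk is HIGHER with asthma
observed = np.where(sent_to_icu, base_risk * 0.1, base_risk)  # ICU care prevents most complications
complication = rng.random(n) < observed

# Train only on (asthma -> complication), as a model fed raw discharge
# records effectively does; the ICU escalation is invisible to it.
model = LogisticRegression().fit(asthma.reshape(-1, 1), complication)
print(model.predict_proba([[1], [0]])[:, 1])  # asthmatics now look LOWER risk

The remedy is not greater accuracy but better framing: the treatment variable has to be in the data, which is exactly the kind of clinical context the study above says algorithm designers too often leave out.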

Most analysts think that AI has a bright future in augmenting medical care but not in replacing human judgment in difficult situations.

Note: Elisha Waldman is the Associate Chief of the Division of Palliative Care at the Lurie Children’s Hospital of Chicago. He is also the author of This Narrow Space: A Pediatric Oncologist, His Jewish, Muslim, and Christian Patients, and a Hospital in Jerusalem.

See also: Too big to fail safe. If artificial intelligence makes disastrous decisions from very complex calculations, will we still understand what went wrong?

