The Algos Are Coming For You

Trust is a foundational value for any civilization. When you arrive at your local Kwik-E-Mart, you want to know, when you hand over your dollar bill, that your 99-cent can of Arizona Iced Tea isn’t filled with battery fluid and will be as fresh and saccharine as the first time you had it. The cashier on the other side of the counter wants to be assured that your dollar bill is an actual dollar bill rather than a counterfeit one that won’t work when he has to purchase the mango chutney his wife keeps hounding him to pick up on the drive home. Without trust, any number of human-to-human transactions would be fraught with doubt, fear, and one too many side-eyed glares.

A few weeks ago, I looked at an arterial blood gas reading in my hospital’s electronic medical record (EMR) for a patient whose care I took over when he suddenly and strikingly became short of breath. I immediately noticed something odd. The numbers were perfect. Usually, when something is catastrophically awry with someone’s biology, an arterial blood gas will let me know, the numbers in the EMR blaring a firetruck red so that I’ll be properly alarmed by the physiologic offense. But what I read was a normal blood pH, a normal carbon dioxide, and a normal oxygen saturation. There wasn’t a hint of lactic acidosis, the buildup of an organic acid that reliably spills from tissues at times of profound stress.

I looked at the patient. He was breathing forty times a minute, roughly three times the normal rate, and using every last muscle of respiration to do so. He had an oxygen mask with a large reservoir of pure O2 strapped to his face. But the blood gas didn’t reflect any of this distress. The numbers were perfect, but the patient was not.

What I was encountering on the screen before me felt like an algorithmic failure. I could not trust the numbers I was looking at. Instead of warning me of my patient’s impending respiratory failure, the numbers told me that all was well. They weren’t inaccurate. The exact reasons why his entire blood gas reading fell within the normal range are beyond the scope of this discussion, but as an educator in a teaching hospital, I’ve watched physicians in training get lulled into false reassurance by software programs informing them that everything is hunky-dory. And it’s not only patients that can be harmed.

Medicine is in the midst of a morbid battle. The algorithms are at our heels. Physicians of all stripes are sensing the encroachment of computational pathways that purport to guide our diagnostic and therapeutic decisions, and in some cases to exceed our clinical prowess. But algorithms have been a part of medicine for decades. In fact, much of medical education has been built around an algorithmic framework for trying to solve problems. Medicine is rife with decision trees that guide physicians to the right answer as to what’s wrong with their patient and what to do about it. We’ve been trained to think algorithmically.

But I’ve always been skeptical of algorithms, whether they come from a software program or not. One example is a pair of algos, the Wells and Geneva scores, that help clinicians decide how likely it is that their patient has a blood clot in their lungs. Yet when they’re put head to head with clinical gestalt, our gut feeling, the compendium of years of experience calcifying into a vague Spidey-sense, the gut does better than either prediction tool.
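To see just how simple these tools are under the hood, here is a minimal sketch of the Wells score for pulmonary embolism in Python. The point values are the commonly published ones, but the function names, the three-tier cutoffs, and the example patient are my own illustration, not clinical software.

```python
# A sketch of the Wells score for pulmonary embolism -- an illustration
# of how thin these prediction "algorithms" are, not clinical software.

def wells_score(
    signs_of_dvt: bool,           # clinical signs of deep vein thrombosis
    pe_most_likely: bool,         # PE judged the most likely diagnosis
    heart_rate_over_100: bool,
    recent_immobilization: bool,  # immobilization or surgery in prior 4 weeks
    prior_dvt_or_pe: bool,
    hemoptysis: bool,             # coughing up blood
    active_malignancy: bool,
) -> float:
    """Sum the published point values for each clinical finding."""
    score = 0.0
    score += 3.0 if signs_of_dvt else 0.0
    score += 3.0 if pe_most_likely else 0.0
    score += 1.5 if heart_rate_over_100 else 0.0
    score += 1.5 if recent_immobilization else 0.0
    score += 1.5 if prior_dvt_or_pe else 0.0
    score += 1.0 if hemoptysis else 0.0
    score += 1.0 if active_malignancy else 0.0
    return score

def risk_category(score: float) -> str:
    """The three-tier interpretation commonly paired with the score."""
    if score > 6.0:
        return "high probability"
    if score >= 2.0:
        return "moderate probability"
    return "low probability"

# Example: a tachycardic patient with a prior clot and no better explanation.
s = wells_score(False, True, True, False, True, False, False)
print(s, risk_category(s))  # 6.0 moderate probability
```

Seven yes-or-no questions and a handful of half points: that’s the whole algorithm, which is part of why a seasoned gut can keep up with it.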

What’s different about the algorithms driving the current skepticism is that they’re built on hard computer science. And those computers are threatening physician livelihoods. AI is capable of analyzing millions of data points to surface trends that would take your old family doc on the outskirts of Bloomington half a year to puzzle through. But I believe the threat of AI replacing the critical thinking of physicians has less to do with trust and more to do with a visceral fear.

The reality is that certain domains of AI will be better than physicians at particular tasks. Let’s imagine the workflow of a typical vitamin D-deficient radiologist. He sits in his dark room day after day looking at thousands upon thousands of X-rays and CT scans in the course of a year. His frontal and temporal gyri are firing away at exceptional neuronal speeds. And yet he still might miss a 1-centimeter nodule on the last read of the day on the Friday before his weekend in the Bahamas.

But the AI didn’t miss it.

The same story can be repeated with pathologists, those equally sun-underexposed colleagues who deal with images that can be digitized and compared to millions of other images to identify aberrations from the norm.

But not everyone is afraid. I recently asked a few future radiologists about the threat of AI to their career prospects. They all recognized that AI isn’t going anywhere anytime soon, so they might as well get comfortable understanding the technology and integrating it into their practice. I’m encouraged by that sort of humility.

Being afraid that AI will replace some tasks does not change the fact that AI will replace some tasks. Doctors would be wise to recognize which types of AI will enhance their diagnostic and therapeutic approaches to patient care and which may remain faulty. The interaction must remain bidirectional, meaning doctors must be the safety net for AI and AI must be the safety net for doctors. After all, when the patient benefits the most, with more accurate work-ups and therapies, who’s really going to complain?