There’s been significant investment in companies creating artificial intelligence (AI) applications for health and health care over the last decade. But while there have been successes, notably in the area of medical imaging, the industry is known more for not yet living up to its potential — think IBM Watson.
The slow pace of AI adoption in health care stems from the fact that health AI sits on the border between two large industries, health care and tech. And like the border between two nations, there are significant differences on either side.
During my career, I have spent time on each side. Now, as the CEO of a company on the border, I’ve developed a deeper understanding of the differences that create barriers to mutual innovation. For health AI to achieve its potential, health care and tech companies must keep the big picture top of mind: health care is about saving lives.
Some of the barriers between health care and tech are quite tangible. Health care is heavily regulated; tech is not. Tech makes significant use of open source software and libraries; health care tends to use proprietary software. But these differences are more like which side of the road you drive on or what currency is in use: they make crossing a border inefficient, but are ultimately solvable.
It’s the cultural differences that can be much harder to navigate.
One significant cultural difference is how each side prioritizes average benefit versus individual harm when assessing innovation. In tech, machine learning algorithms generally optimize for average benefit. The health care industry, in contrast, tends to pay greater attention to individual harm, not wanting innovation to come at the cost of worse outcomes for even a few patients. The challenge of tech and health care working together arises not because one side is wrong, but because both sides are right.
Navigating the barriers
The complexity of these cultural differences is at the root of a lesson Cornerstone AI, the company I lead, recently learned. Our largest customer has health data for more than 30 million patients that needed to be algorithmically cleaned. The customer is certainly interested in average metrics such as the net reduction in errors, the net increase in complete data, and the like, which capture the overall value of the data as a whole. But the customer is equally interested in ensuring that the data are not harmed as a result of the process, even for just one of those 30 million patients. As a consequence, the AI software we built had to meet both standards, a higher bar that took significantly longer to achieve.
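To make the two standards concrete, here is a minimal sketch of what evaluating a cleaning step against both bars might look like. The data, metric names, and pass/fail logic are entirely hypothetical, not Cornerstone's actual methodology:

```python
# Hypothetical sketch: judge a data-cleaning step by an average standard
# (net error reduction across all records) AND a per-patient standard
# (no individual record made worse). Counts are made up for illustration.

errors_before = {"p1": 4, "p2": 2, "p3": 7, "p4": 1}
errors_after = {"p1": 1, "p2": 0, "p3": 2, "p4": 2}  # p4 got worse

# Average standard: did cleaning reduce errors overall?
net_reduction = sum(errors_before.values()) - sum(errors_after.values())

# Per-patient standard: was any individual record harmed?
harmed = [p for p in errors_before if errors_after[p] > errors_before[p]]

print(f"Net error reduction: {net_reduction}")  # positive → passes the average bar
print(f"Patients harmed: {harmed}")  # non-empty → fails the per-patient bar
```

On this toy data the step passes the average standard (nine fewer errors overall) yet fails the per-patient one, which is exactly the gap between how tech and health care tend to score the same result.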
The cultural differences extend to how each side views software automation and algorithmic decision making. On the tech side, having a person review the prediction each time the algorithm is run may be viewed as a bad, non-scalable business model. On the health care side, having a physician review every algorithmic diagnosis may be viewed as good medical practice. Bridging this divide is essential for the growth of health AI. For example, tech companies can adopt principles from clinical trials in reporting AI results, giving more confidence in the underlying algorithms. And health care can follow tech's lead in recognizing that cloud-based and open source software are not incompatible with data security and privacy.
Here’s a personal example. When my daughter was an infant, she had a cold and developed a fever. She ended up in the hospital with a diagnosis of meningitis. There are two kinds of meningitis: viral, which is generally mild, and bacterial, which can be very serious. Distinguishing between the two takes a few days, which felt like a lifetime to her parents. The doctors recommended starting intravenous antibiotics right away because, even though antibiotics do nothing against viruses and they come with potential side effects, the risks of untreated bacterial meningitis were greater.
As data scientists, my wife and I asked what the chances were of our daughter having viral versus bacterial meningitis. The answer we got was 50-50, so we decided to go ahead with treatment. But we then did hours of PubMed research and found a published model that could estimate this probability. We manually calculated the model's prediction for her case and realized it estimated a 98% probability of viral meningitis vs. only 2% for bacterial. We breathed a sigh of relief.
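Clinical risk scores of this kind are often logistic regressions: a handful of weighted inputs summed into a score, then mapped to a probability. The sketch below shows the general shape of such a calculation; the inputs and coefficients are invented for illustration and are not the published meningitis model we used:

```python
import math

# Hypothetical clinical risk score — coefficients and predictors are made up,
# NOT the published model from the story. It illustrates how a logistic
# regression turns a few bedside inputs into a probability.

def bacterial_risk(csf_wbc_elevated: bool, fever_days: int, seizures: bool) -> float:
    """Return an illustrative probability of bacterial (vs. viral) meningitis."""
    score = -4.0  # intercept (hypothetical)
    score += 1.5 if csf_wbc_elevated else 0.0
    score += 0.3 * fever_days
    score += 2.0 if seizures else 0.0
    return 1.0 / (1.0 + math.exp(-score))  # logistic link: score → probability

p = bacterial_risk(csf_wbc_elevated=False, fever_days=1, seizures=False)
print(f"{p:.0%} probability of bacterial meningitis")  # low, with benign inputs
```

The arithmetic is simple enough to do by hand, which is what we did that night; the point of embedding such a model in the medical record system is that no one should have to.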
Had this model been integrated into the hospital’s medical record system, and available instantly for the doctor to review with us, it would have put our minds much more at ease. We likely still would have gone ahead with treatment — 2% means something quite different for your baby girl than it does in an academic calculation — but others in a similar situation might not.
Decisions ultimately need a physician or parent or other human to balance the novel information from AI with the specifics of each situation. Personalized prediction should be available to all, not just those with two nerdy parents able to dedicate time to this research.
Forward progress for health AI
I share the stories of my company and my daughter to illustrate the complexities of what happens when AI models cross the border from tech to health care and see the sign “First, do no harm.”
The promise of health AI comes with the humility that those of us who develop health algorithms have learned: algorithms are only as good as the data that go into them and the humans who interpret the results.
The good news is that there’s more momentum in health AI than ever before. Health data are emerging from their proprietary, siloed systems to serve as inputs to machine learning algorithms. The tangible barriers to the fusion of tech and health care are coming down. The cultural barriers will take more time, but companies focused on health AI can help bridge them as well, with a healthy respect for what humans and AI can uniquely contribute. As is true with cultures, when people building products at this intersection respect and celebrate the contributions of each side, they will be able to achieve the full promise of health AI.
Michael Elashoff is the CEO and cofounder of Cornerstone AI.