AI caught a hidden problem in one patient’s heart. Can it work for others?

Somewhere in Peter Maercklein’s heartbeat was an abnormality no one could find. He survived a stroke 15 years ago, but doctors never saw anything alarming on follow-up electrocardiograms. Then, one day last fall, an artificial intelligence algorithm read his EKGs and spotted something else: a ripple in the calm that indicated an elevated risk of atrial fibrillation.

Specifically, the algorithm, created by physicians at Mayo Clinic, found Maercklein had an 81.49% probability of experiencing A-fib, a quivering or irregular heartbeat that can lead to heart failure and stroke. Just days later, after Maercklein agreed to participate in a research study, a wearable Holter monitor recorded an episode of A-fib while he was walking on a treadmill.

The finding dramatically altered the course of his care. He was put on a blood thinner and eventually received a pacemaker, interventions that happen too late, or not at all, for hundreds of thousands of people who die every year of untreated heart disease.


“I would have never known that I had A-fib,” said Maercklein, a 73-year-old retired hospital finance executive at Mayo who lives in rural Olmsted County, Minn. “For me, it worked out incredibly well. Without this study, who knows when I would have been diagnosed.”

His story encapsulates the dream of what cardiac care could become with the help of artificial intelligence, a system that plucks hidden signals from endless streams of data to anticipate — and treat — illness before it imperils patients.


But the more AI begins to look like a miracle for one patient, the more that miracle must be questioned on behalf of all the rest. It is impossible to conclude that AI saved Maercklein’s life, or that Mayo’s EKG algorithm will reliably improve outcomes for other kinds of patients.

Even cardiologists who believe in the promise of AI are adamant that it still has a long way to go to be ready for routine care. The information it surfaces may seem intuitively beneficial. But in the context of treating serious diseases, data is as potent as a drug: It can help or it can harm. It can effectively target disease or cause complications and unnecessary care. It can reliably help white patients in the Minnesota countryside, and then fail to produce the same benefit for Black patients in New Orleans, or Hispanic patients in Los Angeles.

“As we’re able to produce ever more precise information, the pressure on us to prove that we can use that knowledge to help patients is going to be ever greater,” said Harlan Krumholz, a cardiologist and director of the Center for Outcomes Research and Evaluation at Yale University. “We’re still early in understanding how to harness this properly for the benefit of individuals.”

Mayo’s algorithm is performing a task that physicians and existing technology cannot. It is predicting which patients are at highest risk of A-fib by analyzing data on their hearts beating in normal rhythm. It is like asking a computer to pinpoint the Midwestern town most likely to get hit by a tornado based only on a review of 30 days of completely normal weather.

It is unclear what the AI is picking up on to make its predictions — an undefined digital pathology in the EKG data, or a subtle biological signal, such as an enlarged atrium, that is invisible to clinicians in the tracings. A study conducted by Mayo physicians on historical EKG data found that it correctly predicted A-fib in about 80% of patients who were verified to have the arrhythmia.

That kind of performance could make it much easier to diagnose the condition, which occurs intermittently and often for short periods, by identifying the group of people most likely to benefit from continuous monitoring. Companies such as Apple and Fitbit have spent huge sums to embed monitoring algorithms into smartwatches and other devices, but experts say the information they deliver is rarely clinically meaningful.

“The problem with the Apple Watch is that it’s sort of screening everybody and there’s such a high rate of false positives,” said Michael Rosenberg, an electrophysiologist at the University of Colorado who published a paper in Circulation, the journal of the American Heart Association, on the use of machine learning to predict A-fib and other conditions.

Even patients with confirmed A-fib might not necessarily benefit from medication or a procedure, but the use of screening automatically means they are more likely to get it. “Screening only adds to the amount of health care people get,” Rosenberg said. “That’s why you have to be careful. It only goes in one direction.”
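That trade-off comes down to simple arithmetic. A rough, back-of-the-envelope sketch is below: the ~80% sensitivity is the figure Mayo reported for its algorithm on historical EKGs, while the specificity and prevalence values are illustrative assumptions rather than numbers from the study. Under those assumptions, it shows why flagging a higher-risk group first, rather than screening everyone, changes how many positive results turn out to be real.

```python
# Back-of-the-envelope Bayes arithmetic: why even a good algorithm can
# produce mostly false positives when it screens everyone, and why
# restricting it to a higher-risk group helps. Sensitivity (~0.80) is the
# figure reported for Mayo's algorithm on historical EKGs; the specificity
# and prevalence values are illustrative assumptions, not study results.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a patient flagged positive actually has the condition."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

SENSITIVITY = 0.80   # reported in the article (historical EKG study)
SPECIFICITY = 0.80   # assumed for illustration only

# Screening the general population vs. a clinician-selected higher-risk group
for label, prevalence in [("general screening", 0.02), ("high-risk group", 0.20)]:
    ppv = positive_predictive_value(SENSITIVITY, SPECIFICITY, prevalence)
    print(f"{label}: ~{ppv:.0%} of positive flags are true A-fib")

# Under these assumed numbers:
#   general screening: ~8% of positive flags are true A-fib
#   high-risk group: ~50% of positive flags are true A-fib
```

The exact percentages depend entirely on the assumed specificity and prevalence; the point is the direction of the effect, which is what the Mayo study is trying to exploit by monitoring only the patients its algorithm deems high risk.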

The question is whether Mayo’s algorithm will make care more targeted and appropriate, or simply lead to more care and higher costs. Maercklein was among hundreds enrolled in a study to test whether the technology can improve diagnosis of unrecognized A-fib by monitoring patients flagged by the algorithm. Patients deemed high and low risk by the AI were enrolled.

“We’re starting to pick up quite a few people,” said Peter Noseworthy, a Mayo Clinic electrophysiologist and a principal investigator on the study. He met with Maercklein after technicians noticed the episode of A-fib while monitoring the data from his Holter monitor in real time.

“He’s exactly the kind of patient we’re trying to find,” Noseworthy said. “He was being seen by us and at risk of A-fib, but was asymptomatic and was never in A-fib while he was here, so we never could pick it up.”

The study, which is not randomized, is just the first step in a longer, more complicated effort to incorporate AI into cardiac care at Mayo Clinic and deploy the technology beyond its walls. The hospital system has formed a joint venture company with nference, a Cambridge, Mass., AI firm, to commercialize its cardiac algorithms and AI models it may eventually develop in other specialties.

In addition to A-fib, Mayo has also developed algorithms to detect several other heart conditions, including a weak heart pump, pulmonary hypertension, and a dangerous condition known as hypertrophic cardiomyopathy, a thickening of the heart muscle that has been implicated in the sudden deaths of athletes like former Boston Celtics star Reggie Lewis.

A-fib is the most common of the conditions Mayo is currently targeting, and is expected to affect about 12.1 million Americans, or about 1 in 27, by 2030, according to the Centers for Disease Control and Prevention. For now, Mayo’s algorithm to identify those patients will remain somewhat self-contained — limited to a repository of 7 million digital EKG tracings collected from Mayo patients since the 1990s. The ability to deploy it elsewhere depends on how well the algorithm generalizes across other EKG machines, patients, and geographies, and whether hospitals have enough computing power, money, and expertise to use it effectively.

The hope is that with further testing and refinement, the AI could directly diagnose A-fib without further confirmation from a monitoring system, allowing for faster decision-making and treatment. But taking that step would require tracing the effects of AI far downstream, to individual patients in a randomized trial, and then monitoring their outcomes for years after they were flagged by the algorithm and treated with medication or a placebo.

Maercklein said the decision to begin taking a blood thinner was fairly straightforward. He’d had high blood pressure since he was in his 20s and had also developed diabetes. The prior stroke, from which he fully recovered, was also an experience he did not wish to repeat. Once the A-fib was confirmed, he and his doctor decided taking medication was the most prudent next step.

“They put me on Eliquis, and I tolerate Eliquis pretty well, so from my perspective as a patient, nothing’s changed,” Maercklein said. “You don’t really get a real feel that, hey, this is AI. It’s not like a Zoom conference with the doctor where the use of technology is obvious.”

But the AI’s invisibility to the patient belies the careful calculations clinicians have to make when using the tool. Before deciding on a course of action, they must weigh the algorithm’s finding against other factors, such as the patient’s age, family medical history, and the frequency of the A-fib.

In many cases during Mayo’s trial, the algorithm has flagged a patient as high risk, but the follow-up monitoring did not confirm the presence of the arrhythmia. That creates another conundrum, said Francisco Lopez-Jimenez, a cardiologist and co-director of the AI in cardiology working group at Mayo.

“We cannot put the patients on anticoagulants (blood thinners) because of the uncertainties,” he said. He added that those cases will need to be assessed individually to determine whether there are other factors that might warrant putting a patient on blood thinners, such as transitory ischemic attacks, or mini strokes. But the inability to confirm A-fib could leave the patient in a prolonged state of anxiety, knowing that they may be at risk of a dangerous heart problem.

It was a feeling Lopez-Jimenez experienced himself recently, when one of Mayo’s algorithms detected an elevated probability of hypertrophic cardiomyopathy upon reading his EKG data.

“It was a Friday and I spent the whole weekend thinking about the possibility of suffering that,” he said. “It’s a very serious condition. People die suddenly, like football and basketball players. I run and exercise, so I was thinking, do I just do nothing?”

The next week he contacted his primary care doctor to relay his concerns. “And he said, ‘What are you talking about?’” Lopez-Jimenez recalled. He then explained the story of the AI and its finding and went in for an echocardiogram, an imaging exam that found his heart was functioning normally.

“For four or five days, I was a victim of my own work,” Lopez-Jimenez said. “That was very helpful to see that perspective and the potential downsides of using these [algorithms] without incorporating extra information.”

In addition to anxiety, false positives often result in excessive testing and unnecessary treatment that can prove harmful and costly. But those risks must be weighed against the benefit of catching potentially fatal conditions in patients who have no other way of detecting them.

Some cardiologists said the calculus on AI’s use is skewed by unrealistic expectations for its performance. Because of its automated nature, some clinicians and the broader public expect the technology to be perfect. An AI that drives your car, for example, is judged harshly on its one accident, not the three or four it prevented. But does a decision to disallow the AI’s use make the roads safer or more dangerous?

“There simply isn’t a way to meet some people’s expectations and I think it could really detract from the speed or how efficiently we begin to start to use these things,” said David Kao, a cardiologist at the University of Colorado and researcher funded by the American Heart Association’s Institute for Precision Cardiovascular Medicine. “The bar that gets set a lot of times is unreasonable for any given AI to meet.”

The debate over AI’s use in heart care will be sorted out through additional research that will put firmer numbers on the ratio of benefit to harm. Mayo’s A-fib study will carry on for several months yet before physicians can report on the number of undiagnosed patients they detected and treated, versus those who were flagged incorrectly or whose reported arrhythmia couldn’t be confirmed.

For Maercklein, who spent his career in science and math, participating in the research study based on the AI’s initial finding was one of the easiest calculations he’s ever made. “If they can pick up A-fib, it’s all the better for me. Why wouldn’t I want to join?” he said. “One of the best things about medicine now is they have all these remote monitors that can pick up more symptoms that they can jump on instead of waiting for something bad to happen — and then it gets a lot harder to fix and a lot more expensive.”

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.