AI can predict missed appointments. How can hospitals use that data for better care?

For every five appointments at Boston Children’s Hospital, one patient doesn’t show up.

Missed appointments are a common problem at health systems. And they’re a particularly attractive target for machine learning researchers, who can use patient datasets to get a handle on what’s causing patients to miss out on needed care. In new research published this month, a group of researchers at Boston Children’s crunched more than 160,000 hospital appointment records from almost 20,000 patients for clues. Their model found patients who had a history of no-shows were more likely to miss future appointments, as were patients with language barriers and those scheduled to see their provider on days with bad weather.
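The paper’s exact pipeline isn’t spelled out in the article, but a minimal sketch of this kind of no-show classifier, using hypothetical feature names and scikit-learn’s gradient boosting rather than the authors’ actual method, might look like this:

```python
# Illustrative sketch only: the feature names, the CSV file, and the
# choice of gradient boosting are assumptions, not the study's pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# One row per scheduled visit; "no_show" is 1 if the patient missed it.
df = pd.read_csv("appointments.csv")  # hypothetical extract
features = [
    "prior_no_show_count",   # history of missed appointments
    "needs_interpreter",     # stand-in for language barriers
    "bad_weather_forecast",  # weather on the appointment day
    "lead_time_days",        # days between booking and the visit
]
X, y = df[features], df["no_show"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score held-out appointments by predicted no-show risk.
risk = model.predict_proba(X_test)[:, 1]
print("AUROC:", roc_auc_score(y_test, risk))
```

The ranked risk scores, not hard yes/no labels, are what a scheduling team would act on, which is why the sketch reports discrimination (AUROC) rather than plain accuracy.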

They’re predictions that, in theory, could help a health system target interventions to the patients at highest risk of missing their appointments and offer them whatever help they need making it in. But even though Boston Children’s leaders helped develop and test the model, the health system isn’t yet sold on taking it out of pilot mode and actually putting it into practice.


“What I really loved about this was the thought experiment about what things might show up as actually predictive,” said Kathleen Conroy, clinical chief at Children’s Hospital Primary Care Center within Boston Children’s Hospital and a co-author of the study. “Even if they are theoretically actionable, what would you even be willing to do with that information and how would you put that through a family-centered lens?”

At first blush, it might seem reasonable to ask staff to proactively address the factors the model turned up, especially since the current protocol — overbooking appointments with an understanding that some share of patients simply won’t show up — means some already underserved patients sometimes wait longer to be seen even when they have appointments. But before adopting any new risk prediction model, the health system will first have to grapple with technical and social questions, including whether it actually improves outcomes for those underserved groups without introducing more biases.


The health system is in the very early stages of exploring alternatives to overscheduling, including potentially putting the model into practice more broadly, researchers told STAT. But it’s a decision that would require buy-in from patients and providers, careful training of staff, and technical support to connect the model with the hospital’s scheduling system.

If the health system does take up the model, it would have to be thoughtful about how to communicate that risk to patients in a way that doesn’t stigmatize them. Right now, all patients get phone, text, email, or patient portal reminders before appointments, depending on their preferences. Certain groups the health system deems at higher risk for missing appointments, like newborns and their families, sometimes get direct phone calls and texts from staff.

Though it can tailor outreach to particularly vulnerable patients, the health system doesn’t currently share with patients when they’re deemed at high risk of missing appointments — and if it ran with the machine learning model to offer them extra help, it would have to be careful to ensure patients didn’t feel judged, Conroy said.

“If you ever said I was a patient who had a very high rate of no-shows, I would find that offensive,” she said. The health system also doesn’t want to “penalize” patients for being late or missing appointments, which could lead them to cut ties with their care providers altogether, she said.

Conroy said that Boston Children’s leaders also need to carefully discuss any algorithms the hospital is considering implementing with its family advisory council and other groups to ensure that it doesn’t harm patients who already face hurdles getting to their appointments.

“We need to be deeply cautious about even the way we think about it in the systems we set up so we don’t accidentally create bias amongst our staff around patients who they might be reaching out to,” she said.

That’s especially important for predictive models that target patients who may already face significant barriers to their care. In the case of missed appointments, the problem doesn’t come from carelessness among patients. They might lack transportation, need to find coverage at work, or face language barriers. All are hurdles health systems might be able to help them address.

“The real goal of driving down no-shows is that we’re just providing more and better care,” said Conroy, who represents providers in decisions about how care is delivered at Boston Children’s.

Part of that work will hinge on understanding how best to help patients once they’re identified as high risk. Some of the model’s findings offer the kind of intel a hospital can act on: If staff know exactly which patients are likely to skip care for lack of transportation, they can reach out well in advance of their appointments to coordinate rides. Other findings were far less actionable: The model found, for example, that patients were more likely to make it in when their appointments fell on days with nicer weather.

“We can’t just write off seeing patients in bad weather in New England,” said Conroy. “Some of the stuff is more predictable and you could see it coming from further out.”

To reliably identify high-risk patients, researchers will also need to fine-tune their models. There is currently little consensus on how best to predict no-shows, meaning that the factors the models turn up and the patients they flag could vary depending on the datasets they pull from and the parameters of the model, said Dianbo Liu, the lead author on the study and a machine learning researcher at Massachusetts Institute of Technology. His team chose to exclude race and race-related data when training the model to avoid widening disparities, and concluded that the exclusion did not hurt the model’s performance, though other models have included those factors. They also had access only to data from 2015 and 2016.
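In practice, that kind of exclusion amounts to dropping the sensitive columns before the model ever sees them. A small sketch, with hypothetical column names, continuing from the earlier example:

```python
# Hedged sketch: column names are hypothetical. Dropping race-related
# fields before training keeps the model from conditioning on them.
SENSITIVE = ["race", "ethnicity"]

X_excluded = df.drop(columns=[c for c in SENSITIVE if c in df.columns])
# Train on X_excluded exactly as before. On the study's data, the
# authors report the exclusion did not degrade predictive performance.
```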

Liu added that health systems haven’t historically had large troves of easily analyzable data from which to build these models, though federal interoperability rules could change that. At Boston Children’s, for instance, more than three-quarters of patients’ records were missing at least one data element; Liu’s team explored ways to flag and account for incomplete patient data when training and running the model, which they said improved its predictive performance.
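One standard way to flag and account for missing values, which may or may not match the team’s exact approach, is to impute each field and append a binary indicator marking where data was absent, so the model can learn from the missingness itself:

```python
# Assumed technique: imputation plus explicit missing-value indicators.
from sklearn.impute import SimpleImputer

imputer = SimpleImputer(strategy="median", add_indicator=True)
# add_indicator=True appends one 0/1 column per feature that had any
# missing values, so "this field was blank" becomes a usable signal.
X_train_filled = imputer.fit_transform(X_train)
X_test_filled = imputer.transform(X_test)
```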

Liu urged health systems and research groups to work together on these models, potentially by adopting a statewide or global model and fine-tuning it to their specific patient populations.
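As a loose sketch of that shared-then-local idea, assuming an incrementally trainable model and file names invented for illustration, a hospital could load a model trained on a broader population and keep training it on its own records:

```python
# Sketch only: the shared model file, the incremental classifier, and
# the local arrays (X_local, y_local) are assumptions for illustration.
import joblib

# Load a classifier trained elsewhere, e.g., by a statewide collaboration.
shared_model = joblib.load("statewide_no_show_model.joblib")

# Continue training on this hospital's own appointment records so the
# model adapts to the local patient population.
shared_model.partial_fit(X_local, y_local)
joblib.dump(shared_model, "local_finetuned_model.joblib")
```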

“The people who are originally developing these systems for clinical practice aren’t the same group of people who are doing the state of the art machine learning,” Liu said. “You somehow have to make those people sit in the same room and help them discuss it.”

Source: STAT