As fatal overdoses once again rise — accounting for 92,183 deaths in 2020, a 30% increase from the year before — public health researchers are racing to develop better tools to prevent them.
Some see promise in models that pull in data and spit out predictions about who is at highest risk of developing opioid use disorder or overdosing, giving health officials and physicians an idea of where to target strained prevention resources. But experts say that a scattered and siloed system to collect data on overdoses and outcomes is hamstringing efforts to further develop and deploy those models.
“These are public health datasets which were never designed for research,” said Scott Weiner, an emergency medicine physician at Brigham and Women’s Hospital in Boston and the director of the hospital’s Comprehensive Opioid Approach and Education Program. “It really is, in many cases, an issue of garbage in, garbage out.”
Predictive disease models are designed to warn patients of their health risks, project disease progression, and help insurers predict costs. But turning data on opioid prescribing, substance use disorder diagnoses, and overdoses into useful models presents a unique set of challenges.
“Cancer models have the advantage that the tumor is like a law of nature. It’s like the airflow over an airplane wing or gravity,” said Benjamin Linas, an associate professor of epidemiology at Boston University School of Public Health. A tumor can be biopsied, the cells grown in a lab, and models produced from the measurable development of those cells. “There’s no analogy for opioid use disorder,” Linas said, “which doesn’t exist outside of the context of society.”
By that, he means that the many complexities of the health care system — and in particular, the treatment of opioid use disorder and the stigma still surrounding the condition — shape what data can be collected and how it can be shared. To protect patient privacy, for example, federal regulations stipulate that certain substance use disorder treatment records be kept separate from a patient’s other health care records.
“Because there’s so much stigma … we need a lot of protection around the privacy of people who have substance use disorders,” he said. “But then the other consequence of that is it hides the problem; it makes it impossible to study.”
Opioid prescriptions can provide some data, but the use of illicit drugs is far harder to track. And medical records on overdoses sometimes don’t specify the drugs involved, which would be valuable for modelers. Different hospitals also use different electronic health record systems that can’t always talk to one another, which can make collecting data on a single patient across more than one system exceedingly difficult.
“If you’re at my hospital and you have an overdose, I mean, you go across the street to [another] hospital, you have an overdose. I don’t know that unless I really specifically dig into and access their records,” Weiner said.
The data shortcomings start before a patient ever reaches the hospital. Weiner said the details collected when an ambulance crew arrives to treat a critically ill patient who has overdosed — information that could be used for public health modeling — are often understandably scattershot. Emergency services data often doesn’t include information on a patient’s past medical history, or often even the drugs involved, Weiner said.
“They’re taking care of this critically ill patient, maybe they had an overdose, they’re giving them Narcan, they’re resuscitating them. And then the ambulance has to just quickly fill out some form or put in standard of care before they have to go back out and do another call, right?” he explained.
“We’re relying on that data that’s quickly put into a computer between these two calls to then determine predictive analytics,” he said.
Taken together, those gaps have created what Weiner calls a “black hole” in valuable patient data. That makes it difficult for modelers like Muhammad Noor E Alam, an assistant professor of mechanical and industrial engineering at Northeastern University, to gather enough useful data to develop models that can actually improve treatment outcomes or help curb overdose rates.
Alam’s team at the university’s Decision Analytics Lab is working on predictive models to identify and account for the various factors that contribute to the development of opioid use disorder. The researchers are also analyzing big datasets to determine why some patients discontinue their opioid use disorder treatment.
Alam’s research uses claims data that holds information on doctors’ appointments, hospital bills, insurance coverage, and other information collected when a patient interacts with a provider.
But data restrictions, built as guardrails to protect patient privacy, also make it difficult to gather enough information to power a predictive model. ZIP codes, for instance, are often removed from datasets, Alam said. But that information is also valuable for analyzing community-level trends in opioid use disorder and overdose risk.
“So when you nail down … those who have all the information available, the patient total becomes very, very little. And so sometimes the power of your analyses diminishes,” Alam said.
Linas said the claims data that Alam and other researchers use can be valuable for certain modeling, especially given that it’s one of the more accessible types of data. But it will never line up perfectly with a patient’s actual outcomes.
“These are administrative records. We’re trying to use them as basically measures of clinical outcomes. We’re taking the track that there was a claim for an admission with opioid use disorder on it as some sort of measure of a more direct clinical outcome. And it was never intended for that purpose,” said Linas. “We do that a lot.”
As a result, while predictive opioid models have potential, their promise won’t be realized until the data itself becomes more accessible and robust. In a recent review paper published in The Lancet, experts examined the existing evidence on predictive modeling for the opioid crisis and emphasized that better data is crucial to furthering the field.
“Data linkage should involve datasets that cover most major health and social agencies and a range of indicators of social functioning and disadvantage,” said Chrianna Bharat, a research associate at the National Drug and Alcohol Research Centre in Australia and co-author of the paper. Syncing those different datasets — which would take policy changes and a cultural shift in how health data is shared — would broaden the kind of work modelers could do.
Bharat and others also stressed that if predictive opioid models stand any chance of entering a clinical setting, they need to be developed alongside the health care workers who may eventually put their recommendations into action.
“I think that’s really important to ensure that they’re not only useful and practical,” Bharat said, “but that they’re understandable for [clinicians].”