A deep dive on generative AI in health care, Pear sold for parts, and progress on stretchable skin

You’re reading the web edition of STAT Health Tech, our guide to how tech is transforming the life sciences. Sign up to get this newsletter delivered to your inbox every Tuesday and Thursday. 

What we know so far about generative AI in health

My colleagues and I have been trying to get a handle on how generative AI and large language models — the technology underlying systems like ChatGPT — will meaningfully change health care. How soon, if ever, will we see doctors feeding our symptoms into a chatbot that spits out a diagnosis? How will we know if the technology they’re using has been trained on biased data?

So we asked the experts: machine learning, ethics and health professionals who are closely observing the development of generative AI and attempts to incorporate it into health care.

Here are some of the key things they think you ought to know:

  • When they spit out an answer in response to a prompt, these models are essentially doing an advanced form of auto-complete by predicting the probability of certain words or phrases.
  • Generative AI models are getting better — and getting better fast — because they’re being trained on more and more data.
  • These models might seem smart, but they’re less intelligent than they seem. In particular, they still can’t reason like a physician would.

Read more from those experts, including perspectives from University of Michigan computer scientists Jenna Wiens and Trenton Chang, athenahealth data science senior architect Heather Lane, and Carnegie Mellon University professor Zachary Lipton.

You asked about AI, they answered

We also asked what questions you had about generative AI in health care, and took your concerns to the experts. They offered some reassurance that health care organizations testing out generative AI will likely face pressure to disclose that they’re doing so, but also advised patients and providers to be vigilant about errors and bias.

The bad news: The technology may be impossible to avoid entirely, experts agreed. “Without being too alarmist, the window where everyone has the ability to completely avoid this technology is likely closing,” John Kirchenbauer, a PhD student researching machine learning and natural language processing at the University of Maryland, told STAT. Also on your list of questions: whether medical records riddled with errors will cause problems for these AI tools trained on them, and whether they run the risk of bias. Read more here.

What’s generative AI’s role in diagnosis? 

While health leaders agree generative AI could help understaffed health systems with burned-out workforces handle simple communications with patients, they’re locked in a heated debate about whether the technology should creep into actual diagnoses, the founders of the General Catalyst and Andreessen Horowitz-backed startup Hippocratic AI (which I wrote about last week) told me.

Co-founders Munjal Shah, a repeat entrepreneur and computer scientist, and Meenesh Bhimani, a doctor and hospital executive, agree that it’s not currently safe to unleash the technology directly on health problems because of its tendency to hallucinate. Their venture aims to build a large language model specifically for use in medicine, pressure-tested by a team of health care professionals. And while it might one day tackle direct patient care, the team will start by focusing on applications that might ease documentation or communication burdens.

Shah said health systems focused purely on generative AI’s diagnostic potential are “struggling with imagination. They can’t think of all the other roles in health care,” he said. “I don’t think we’re really going to know for a while what all the capabilities are.”

Why high-impact, high-risk AI research gets buried

At a recent AI conference in Boston, Atman Health chief medical officer Rahul Deo raised a potentially contentious point about AI research: that the riskiest and most impactful issues draw less attention than splashier subjects that aren’t likely to move the needle in medicine.

Models that can figure out how to automate complex physician tasks would be the most impactful, according to Deo’s hierarchy of AI research. But a lot of today’s AI research, he argues, is designed to entice journal editors rather than advance care.

“I think the field did [need] a little provocation, otherwise you just find yourself building what I consider to be the stuff on the bottom, which are just things to impress your peers with different flashy papers,” Deo told Brittany. “I think there is a tendency towards [doing] some stuff in a bubble that never has the ability to get out and actually impact anything.”

Read more here.

Pear sold for parts in $6 million auction

Late last week Mario broke news that Pear Therapeutics — a pioneering developer of prescription apps — was sold to four separate companies following its bankruptcy declaration last month. Click Therapeutics, Welt Corp, Harvest Bio, and Nox Health Group agreed to acquire parts of the company for about $6 million, a much smaller sum than Pear’s $32 million debt. Harvest Bio, which acquired assets including the company’s reSET app for substance use disorder, is run by Pear’s founder and former CEO, Corey McCann. 

ReSET received the first FDA clearance for a standalone digital therapeutic in 2017. Pear subsequently got clearance for apps treating opioid use disorder and insomnia. Read more here.

How e-skin could reconstruct touch

A team of Stanford researchers is hard at work developing and improving electronic skin — technology that could mimic the sense of touch. In a paper late last week, the researchers described how they used a soft, stretchable electronic skin to trigger a nerve response: more pressure or a higher temperature nudged its electronic pulses to move faster. When the skin was tested on a rat, different stimuli triggered leg twitches, the researchers said. Read more on the research and its implications from Lizzy.

What we’re reading

  • Three ways to test medical AI for safety, STAT
  • Walmart plans to offer pet telehealth, CNBC
  • How chronic illness patients are cobbling together wearable data, Wired
  • How AI can be weaponized in health care, Axios
Source: STAT