The Food and Drug Administration is considering whether it needs to revamp guidance on how FDA-regulated manufacturers should deal with online misinformation about their products as the agency continues to contend with a deadly tide of falsehoods spreading on social media, Commissioner Robert Califf told STAT.
In the interview, Califf said the agency is grappling with how best to handle an explosion of misinformation, including vaccine skepticism that has worsened the toll of the Covid-19 pandemic. While the agency is still weighing whether it can take on misinformation more head-on, he said the FDA might well be able to find a clear path to better supporting drug, vaccine, and device makers whose products are the targets of misinformation.
“Let’s say you have a company whose product is being maligned with misinformation, that company, it seems to me, does have a right to speak back about that,” Califf said, noting that the agency’s 2014 draft guidance on how drugmakers can use social media may need to be updated to reflect the current climate. He said the FDA would still need to be on guard against “companies taking license to go well beyond combating the misinformation and [instead] promoting inappropriately.”
At the same time, Califf pushed back on the notion that the agency could put much more pressure on Twitter, Instagram, TikTok, and other sites as a way to tackle health misinformation. In January, researchers from Yale and Harvard argued in Nature that the FDA needs to take a more aggressive approach to misinformation — not just by revamping its guidance for manufacturers, but also by policing online health information and redesigning drug labels to make accurate health information even more accessible.
“The FDA has an important role in reducing the public health harm that results from health misinformation, despite it being inadequately equipped to protect consumers from this growing threat,” they wrote.
But Califf suggested that neither the law nor the current political environment would permit some of those moves. He and other health experts told STAT that efforts to control health misinformation by taking aim at platforms may be on legally shaky ground — and could well risk backlash at a moment when health care and free speech are both highly politicized.
“We have a Congress which is watching carefully and will react if we overstep our bounds,” Califf said.
Medical misinformation, of course, predates the modern social media era. But there’s little doubt that today’s digital environment has supercharged the problem. Social media algorithms tend to promote content that drives views and responses, meaning they routinely amplify outrageous claims and reward influencers with huge — and highly profitable — followings. At the same time, experts in public health and government, whose communications have to be more cautious, struggle to find an audience.
“Anybody with an opinion now can reach, you know, a billion people in 10 minutes,” Califf said in a follow-up interview with STAT. “That’s just a whole different world.”
Because the largest social platforms like YouTube or Twitter also tend to show users what they’ve already liked, people who engage with misinformation may then get bombarded with increasingly extreme claims — even as they’re less likely to see context, fact-checking, or dissenting views. Major social media firms have tried to ban certain types of medical misinformation or put in place automatic fact-checks when it pops up, but enforcement is spotty. One study in 2021 found that just a small group of 20 powerful influencers had produced hundreds of thousands of anti-vaccine posts, potentially reaching nearly 60 million people.
The Nature paper points out that the FDA has expansive power to try to stop people from making misleading claims when selling products it regulates. Generally, the FDA focuses on companies making or selling fake cures and treatments. But in recent years, it has occasionally used its position to try to nudge internet infrastructure companies to take down websites devoted to such quackery.
Kushal Kadakia, a Harvard medical student and the lead author of the Nature piece, said the FDA should do more to extend this to the bigger social media sites, helping tech firms that want to remove deceptive health ads, add context to them, or make them less visible.
“There should be a backdoor between the tech companies and the FDA for addressing those situations,” Kadakia said. The FDA said it does sometimes flag posts about unapproved or fake products to digital marketplaces and social media companies, suggesting it may be open to that kind of coordination. Kadakia added that the agency, while respecting free speech, should consider whether it has the power to regulate algorithms that choose what information pops up on the web, at least insofar as the software is favoring the kinds of content the FDA would already be able to stop.
“Admittedly that’s a bit of an open question, at least when it comes to the recommendation algorithm,” he acknowledged. “I think certainly someone with a lot of tolerance for litigation could explore that.”
The commissioner blanched at the notion.
“Putting pressure on companies is not something that we can do unless there’s a specific law that directs us to do it,” said Califf, who served as head of medical strategy at Google’s parent company after his first stint running the agency, which ended in 2017. Experts also agree that, contrary to Kadakia’s contention, social media’s algorithms currently have the same First Amendment protections as the content they handle. The government can’t regulate them any more than it could try to dictate the selection of a newspaper’s front-page stories.
Some of the FDA’s potential efforts could find political support: The White House has suggested the law should change to hold platforms legally liable for Covid-19 misinformation. The online trade in opiates — whether fake or real — has also drawn bipartisan anger in Congress, with high-profile attempts to force the platforms to take responsibility for posts.
Ultimately, though, some experts agree with Califf that some of the further-reaching proposals for the FDA to take on misinformation are simply off the table. Samir Jain, vice president of policy at the Center for Democracy & Technology, said that while it can be useful for the government to put out authoritative information, “anytime you have the government going beyond, trying to pressure or influence private companies in terms of what speech they carry… you start crossing into dangerous territory both legally, from a First Amendment standpoint, and politically.”
Indeed, conservative politicians have been ramping up attacks on any hint that the government is clamping down on Americans’ speech by strong-arming social platforms. Last year, Republicans accused the Department of Homeland Security of squashing conservative viewpoints through an initiative called the Disinformation Governance Board, which was supposed to help counter Russian cyber-operations with authoritative facts but was ultimately scrapped.
Califf said the legal “bounds” that Congress has established are one reason that the private sector and civil society need to take a lead role.
“We have to be careful that we stay within the bounds that we have, and we need support from powerful forces on the outside,” he said.
Political clashes over health misinformation extend beyond Covid treatments and vaccines. Some right-wing politicians have been making exaggerated or false claims about gender-affirming care for transgender kids and teens, even as several GOP-led states move to stop the use of such treatments.
To Califf, combating misinformation requires the support of professional associations, individual health care workers, and educational institutions, which may have more resources and more flexibility to act than a government agency.
“I love my friends at Yale,” he said, referencing the authors of the Nature paper. “I’d ask the question: What are universities doing about misinformation?”
Of course, the debate about the FDA’s role in combating misinformation isn’t all First Amendment law and the politics of social media regulation, and some experts said the FDA could take other steps that might not be as subject to pushback. One would be to update the social media guidance for drug manufacturers that Califf mentioned, which offers only limited advice on issues such as how to post about a medicine on platforms with tight character limits.
“I’m coming to this fundamentally as a medical trainee who sees the ways in which people who do not go on the internet looking for health misinformation end up consuming information that does harm them,” said Adam Beckman, another Harvard medical student and coauthor of the Nature piece. In 2021, Beckman helped Surgeon General Vivek Murthy assemble an advisory urging social platforms to do more to remove medical misinformation.
During Califf’s tenure, the FDA has taken some steps to post scientifically sound messages, including sometimes lighthearted resources and videos. The paper’s authors said the agency should supplement any attempts to go viral with monitoring of social media for falsehoods about FDA-regulated products.
In either case, however, the agency is hardly poised to become an influencer or a comprehensive misinformation monitor. (And even the government’s own medical guidance sometimes draws the ire of experts.)
Finally, experts also argue that the FDA could modernize its approach to drug labels, which were last redesigned in 2006 and often come with a ton of technical detail in tiny font. Drug labels — and in particular, information about risks and side effects — have been misused and misunderstood in ways that allow inaccurate information to spread. Theoretically, they can reach everyday Americans right when they’re making decisions about their health. Beckman, Kadakia, and others have suggested better labels would include reader-friendly fonts, more graphics, and even QR codes that would take people to FDA-approved information sites.
“Anytime I go to pick up a prescription or even something over-the-counter, even as a medical student, I have difficulty figuring out what’s going on,” Kadakia said.
Califf said the FDA is looking into other changes to address misinformation, but the agency’s efforts are constrained by its limited budget and bandwidth. And the problem has become far bigger than regulators — or even social media platforms — can solve on their own.
Califf described how, the day before he’d discussed the Nature article with a reporter, he’d read the results of a study that found even critical care physicians interpret medical evidence through the lens of their political beliefs or polarized media diets.
“The truth is losing the battle right now,” he conceded. The study suggested that the problem of health misinformation might be even more widespread, and thorny, than he realized, and he said it’s another reason he wants a whole-of-society approach to health misinformation.
“No one I can find believes that the FDA or even the federal government can solve this problem,” Califf said. “It can do more, and we can do a better job, but we need a vast network of people who are dedicated to the truth.”
Brittany Trang contributed reporting.