Trade-offs in medicine can be harsh, even when clearly necessary. Chemotherapy can save people from cancer, but the side effects can wreak havoc on their bodies for months or years. These trade-offs aren’t limited to the clinical world. They are also part of research.
For half a decade, I oversaw a clinical trial testing a new vaccine against the hepatitis C virus, a slowly progressing and usually silent infectious disease that damages the liver over decades, and can cause cancer and even death. The trade-off in this research meant that our team intervened to try to prevent the very hepatitis C infections that were needed for the trial to succeed.
Prevention is sorely needed. Hepatitis C infects an estimated 1.5 million people every year and kills another 300,000. If current trends continue, by 2040, hepatitis viruses could end more lives every year than HIV/AIDS, tuberculosis, and malaria combined.
The trial I led involved 548 people who inject drugs, the group at greatest risk for this blood-borne disease in the industrialized world, especially in the United States in the wake of the opioid epidemic. This group was chosen because its members are at the highest risk of contracting hepatitis C and would be a major target population for any new vaccine.
The concept was simple: As part of a double-blind trial, fully informed volunteers were randomly assigned to receive the experimental vaccine or a placebo. Neither the participants nor the researchers knew who got what. Then we waited for a sufficient number of hepatitis C infections to occur among the participants to achieve the statistical power needed to determine the effectiveness of the vaccine.
But the study team could not just stand by as infections happened; they were morally compelled to do their best to protect participants from harm.
This meant that the team spent hours educating participants on how to decrease the risk of infection. Team members provided participants with information on local drug treatment programs. They walked participants from trial visits directly to needle and syringe exchange programs. When a couple, both of whom were trial participants, showed up having hit rock bottom and pleading for help, a staff member spent nearly three work days contacting treatment centers to get them both placed into a rehabilitation program.
Despite our efforts, new infections occurred. Each one was a blow. We couldn’t help but feel we had failed with every new diagnosis, even knowing that they were necessary to discover whether the vaccine offered the kind of protection that might one day save millions.
The team’s interventions reduced the rate of hepatitis C virus infection to such a degree that it took six years to accumulate enough infections to reach a conclusion: the vaccine failed to provide adequate protection compared with the placebo.
This was disappointing, but not surprising. It is rare for a medication or vaccine to work perfectly as first designed. During those six years, however, millions of people around the world succumbed to this disease, and an estimated 9 million more people became infected.
Field trials among at-risk groups are impressive logistical feats, but they can’t be relied on to test hepatitis C vaccine candidates quickly. Fortunately, there is a much faster option: challenge trials, also known as the controlled human infection model. In these, healthy volunteers are given a hepatitis C vaccine and then deliberately infected with the hepatitis C virus.
Human challenge studies trace back to the world’s first vaccine: Edward Jenner inoculating an 8-year-old boy with cowpox and then exposing him to smallpox more than 200 years ago. Since then, challenge trials have been indispensable in the fights against typhoid, malaria, cholera, and many other diseases.
A challenge study for a hepatitis C vaccine would recruit fully informed, freely consenting adult volunteers. After participants receive the vaccine, they would be intentionally infected with the hepatitis C virus and monitored over several weeks or months during regular check-ins. Some people would clear the virus on their own. Those who do not would be given highly effective direct-acting antivirals to cure them.
Most people with hepatitis C have no symptoms, and, because the disease is transmitted by blood, it is exceedingly difficult to spread to others in day-to-day contexts. With basic precautions, participants would be able to otherwise live and work normally during the study.
Volunteers would be compensated for their time and fully informed of the risks of participating, which would be very low. If hepatitis C infection occurs, damage to the liver would be minimal during such a short time frame. Drugs developed in recent years now have a more than 98% success rate in completely curing hepatitis C in people who do not already have cirrhosis, a late stage of the disease in which scar tissue replaces healthy liver tissue. If the treatment doesn’t work the first time, additional rounds are available, essentially curing close to 100% of infected people.
Despite the clear benefits of a challenge trial, which I and others outlined in the New England Journal of Medicine last year, it is far from a given that challenge trials for hepatitis C will ultimately be implemented. Regulatory approval may be the biggest hurdle, followed by potential issues in recruiting enough willing participants.
Compared to diseases like malaria, HIV, or Covid-19, hepatitis C has a lower profile in the public consciousness. This may be because it primarily affects poor and vulnerable groups worldwide. But if you believe, as I do, that all lives are worth protecting from disease, then the continued need to develop a vaccine against hepatitis C and to mount challenge trials to test them should be clear.
Andrea L. Cox is an immunologist, infectious disease specialist, director of the Medical Scientist Training Program, and professor of medicine at Johns Hopkins University. Jake Eberts, the communications director for 1DaySooner, an organization that promotes challenge trials, helped draft this essay.