Opinion: Scientific publishing’s new weapon for the next crisis: the rapid correction

Some observers have hailed the response to Covid-19 as a triumph for science. A hesitant triumph, perhaps. Full of missteps and recriminations, frustrations and death. Maybe more a hard-won victory.

Scientific outcomes certainly support some optimism. The world has gained a better understanding of the relevant Covid-19 disease processes, assembled a solid clinical research base for managing the disease, and produced multiple vaccines that are driving down infections and deaths.

Yet the infrastructure for producing empirical knowledge about the SARS-CoV-2 virus frequently failed, leading to potentially devastating public health consequences. Science shot itself in the foot. Not once or twice, but regularly, and painfully. It’s amazing any metatarsals remain. A big part of the problem has been the ever-widening mismatch between the speed of scientific publishing and error correction.


The slow pace of correcting errors

Research papers published in scientific journals are both the primary source of public scientific information and the main metric for scientists’ career success. Getting a paper published can be a long, tedious process that involves peer review: a detailed assessment of the manuscript by a small number of external experts.

These experts aren’t infallible, so some published papers contain errors. These may be obvious and egregious, but they usually have more consequences for other scientists than for the general public — when a study about how to get children to eat more carrots is retracted, it’s mostly an issue for the researchers who have built their work on its findings. Yet even when conclusions could immediately affect public health, scientific publishing maintains an astonishingly strong resistance to modifying or correcting previously published articles. Doing nothing is the most common outcome.


If evidence of errors does emerge, the process for correcting or withdrawing a paper tends to be alarmingly long. Late last year, for example, David Cox, the IBM director of the MIT-IBM Watson AI Lab, discovered that his name was included as an author on two papers he had never written. After he wrote to the journals involved, it took almost three months for them to remove his name and the papers themselves. In cases of large-scale research fraud, correction times can be measured in years.

Imagine now that the issue with a manuscript is not a simple matter of retracting a fraudulent paper, but a more complex methodological or statistical problem that undercuts the study’s conclusions. In this context, requests for clarification — or retraction — can languish for years. The process can outlast the tenure of the responsible editor, resetting the clock on the entire ordeal, or even the journal itself, which can cease publication and leave an erroneous article in the public domain, without oversight, forever.

And that was before Covid-19.

The pandemic sped up time to publication

The pandemic turbocharged the slow process of scientific publishing. A tremendous hunger for information arose, along with boundless anxiety, fear and distrust, information and disinformation, overconfidence and caution. The world was blindsided. Answers were required. Early on, it seemed like any answers would do.

Preprint servers — online portals for making public complete but non-peer-reviewed manuscripts — have been part of scientific publishing since the 1990s. In the information-hungry environment of 2020, their use greatly expanded. Likewise, many journals organized formal peer review processes to be completed in days, rather than the weeks or months typical before the pandemic, and then published studies online. Today, a report written by a single researcher or team can go viral minutes after it is released and can change health care policy the next day.

This renaissance in publishing, however, has done nothing to speed up the slow process of reconsidering or investigating published papers. The mismatch between the new speed of publication and the old, glacial process for correcting errors has become extreme, with very real — and very dangerous — consequences.

The debacle around hydroxychloroquine is the most notorious example. The original report was posted to a preprint server in mid-March 2020 and appeared on the journal’s website after a single day of peer review. Within 72 hours, researchers had raised extremely serious concerns about the study’s ethical approval process, methods, and conclusions.

The original paper was the primary force behind changes in global health policy, with millions of people around the globe being given hydroxychloroquine as a treatment for Covid-19 almost immediately.

Elsevier, the company that published the report, announced an investigation on April 11, 2020. It took three months to complete — an eon in pandemic time — and an independent review concluded that “this study suffers from major methodological shortcomings which make it nearly if not completely uninformative.”

But here’s the problem: The paper touting hydroxychloroquine has more than 4,000 citations on Google Scholar, while the review concluding it was worthless has been cited just 38 times. Presumably, the state of Oklahoma didn’t read the review, or it wouldn’t have stockpiled so much of the drug that it now can’t get rid of it.

Another example: A study on pandemic-related school closures concluded that they caused more deaths than Covid-19 itself — a finding sure to grab headlines, but from an analysis tainted by a mathematical error. One of us (G.M.K.), along with a colleague, tried to get it corrected almost immediately after publication, given the impossibility of its conclusion. Yet even though we exhausted every option at our disposal for getting the error corrected, the interminable pace of academic critique meant that two months ticked by before the study was modified. The surviving paper now asserts that school closures for European children were entirely harmless, while in the U.S. they cost more years of life than most other forms of human disease combined.

In the interim, this study was featured on dozens of newscasts (many complete with long-form interviews of the authors), and was cited in official documents produced by both the World Health Organization and the European Union. The subsequent correction, as far as we can tell, had no major media coverage whatsoever.

Another example is a mathematical modeling paper that concluded that stay-at-home policies did not reduce deaths from Covid-19. The controversial finding was immensely popular, but after noting several issues with the research, one of us (G.M.K.) and several colleagues submitted letters to the journal pointing out that its conclusions may not mean very much. Two months on, those critiques — which the peer reviewers and editors mostly agree with — are still being revised for publication as letters.

Meanwhile, the horse has long since bolted from the barn. In the two months since publication, the question of stay-at-home orders has lost a great deal of relevance, as vaccine deployments reduce their necessity. The paper was tremendously important when published, but the academic process of correcting it will outlast its immediate relevance.

And this is a best-case scenario. The journal editor was incredibly responsive, putting an expression of concern on the paper almost immediately and getting back to critics within days. The report’s authors are engaged in the discussion. If this were happening at almost any other time, it would be a delightful example of academic parley, with disagreements about evidence hashed out over a metaphorical beer.

But it’s a pandemic. If the criticism of this report is right and the authors wrong, it’s not just an interesting side note in the annals of science but a life-and-death scenario.

And these are just some high-profile examples. One implausible preprint on using vitamin D to treat Covid-19 simply disappeared from the internet when researchers started questioning whether its authors existed at all. Other examples include everything from wild, unsupported hypotheses to potentially fraudulent studies to mathematical models that were simply wrong. The website Retraction Watch has been chronicling the deaths of these withdrawn pieces of research.

The scientific publishing and correction mismatch

This mismatch — fast publication, slow correction — has killed people. Evidence that scientists knew almost immediately to be inaccurate, unsupported, or fraudulent was frequently used to influence or change public health policy.

A phrase often used in introductory courses on the scientific method, “science is self-correcting,” is a barefaced lie. Instead, scientific criticism is devalued, ignored, or openly attacked.

The correction process, those slow, grudging footnotes that provide crucial amendments to the world’s knowledge base, depends in large part on scientists who go out of their way to provoke reevaluation of published work.

The list of epithets employed for people who attempt it is long and vicious: shameless little bullies, vigilantes, the self-appointed data police, scientific McCarthyites, destructo-critics, and most famously, methodological terrorists. Given this attitude, it should come as no surprise that almost no one around the world is employed to check the quality of scientific work after it is public, as funding to do so is effectively zero.

This form of review is thankless, pedantic work which brings scant rewards; is regarded as academically and scientifically second class; and is universally treated as the purview of antisocial oddballs. Publish terrible research and it can be regarded as solid progress along one’s career path, or even rewarded. Critique such failures and you’ll be ignored at best, and mocked or even threatened, legally and otherwise, at worst.

One of us (G.M.K.) was personally attacked in an academic journal article that included outright falsehoods about him because of his Twitter critiques of published academic papers. The other (J.H.) has been threatened twice with legal action for providing authors with mathematical critiques of their work. These were not allegations of fraud or fabrication. A simplified version would proceed along the lines of “Two plus two does not equal five. Care to explain?” which might draw the response, “You’ll be hearing from my lawyers.”

This situation must change, and change quickly. Any crisis that requires scientific information in a hurry will produce hurried science, and hurried science often includes miscalculated analyses, poor experimental design, inappropriate statistical models, impossible numbers, or even fraud. Having the agility to produce and publicize work like this without having the ability to correct it just as quickly is a curiously persistent oversight in the global scientific enterprise. If corrections occur only long after the research has already been used to treat people across the world, what use are they at all?

There are some small steps in the right direction. The website PubPeer aggregates post-publication scientific criticism, and when shoddy research makes it into the literature, hordes of critics may leave comments and questions on the site within hours. Twitter, likewise, is often abuzz with spectacular scientific critiques almost as soon as studies appear online.

But these volunteer efforts are not enough. Even when errors are glaring and obvious, the median response from academic journals is to deal with them grudgingly or not at all. And academia in general takes a faintly disapproving tone toward crowd-sourced error correction, ignoring the fact that it is often the only mechanism that exists to do this vital work.

Scientific publishing needs to stop treating error-checking as a slightly inconvenient side note and make it a core part of academic research. In a perfect world, entire departmental sections would be dedicated to making sure that published research is correct and reliable. But even a few positions would be a fine start. Young researchers could be given kudos not just for every citation in their Google Scholar profile but also for every post-publication review they undertake.

In the lumbering Gargantua that is academic research, it is past time for this hostile, piecemeal system of scientific publishing to be replaced by a formal approach to catching mistakes.

Maybe then we’ll be prepared for the next crisis.

Gideon Meyerowitz-Katz is an epidemiologist from the University of Wollongong in New South Wales, Australia. James Heathers is the chief scientific officer of Cipher Skin.
