In 2004, Mike Rossner and Kenneth Yamada, two top editors at the Journal of Cell Biology, wrote an editorial alerting readers to what they saw as an emerging problem in science: Thanks to Photoshop, researchers could prettify the images in their manuscripts in ways that might cross the line into deception in an effort to clear the bar of peer review.
“Being accused of misconduct initiates a painful process that can disrupt one’s research and career,” they wrote, underscoring work from the U.S. Office of Research Integrity published two years earlier, when the journal had also raised the flag. “To avoid such a situation, it is important to understand where the ethical lines are drawn between acceptable and unacceptable image adjustment.”
More prescient words are hard to find.
In what may have felt like an academic earthquake to those unfamiliar with that history, Stanford University acknowledged this week that it is investigating its president for research misconduct over concerns about the integrity of the images in at least four of his published papers.
But these cases aren’t rare. A retraction for image manipulation happens about once every other day, according to the Retraction Watch database of more than 37,000 retractions and counting. And news of the investigation into the work of Marc Tessier-Lavigne, which the Stanford Daily broke Tuesday, was anything but a surprise for habitués of PubPeer, a website that allows commenters to post their concerns about scientific articles. Critics have been flagging issues with images in Tessier-Lavigne’s articles since 2015; the publications in question date back to 2001.
Still, the revelation and other recent cases herald a public arrival for the awareness of image manipulation as a serious problem in science. Perhaps most salient of these is the case of Sylvain Lesné, an Alzheimer’s expert at the University of Minnesota. Several of Lesné’s studies are now being scrutinized after a whistleblower raised concerns about the images. As Science reported in July, “Some Alzheimer’s experts now suspect Lesné’s studies have misdirected Alzheimer’s research for 16 years.”
But a comment in the Science piece from Holden Thorp, the editor-in-chief of the journal, is a reminder that even some key players at the pinnacle of scholarly publishing seem to have slept through multiple alarms. Thorp said “2017 would have been [near] the beginning of when more attention was being paid to this — not just for us, but across scientific publishing.”
That kind of comment grates on Rossner, whose journal began using “digital image experts” to screen images in submitted manuscripts soon after it first accepted online submissions in 2001. “I really made it a crusade to try to educate other publishers and other journals about what we were doing and to convince them to take up the same effort,” he told STAT of his time at JCB, which he left in 2013. “Dozens did take up the effort of screening images before publication, including many of the big players.”
“What’s disappointing to me is that I’m not aware of any other publisher starting to do image pre-screening in the last 10 years,” Rossner said. Some even appear to have stopped, he added.
Instead, publishers have largely left image screening to unpaid sleuths following publication. Among those who flagged the work of Tessier-Lavigne was Elisabeth Bik, a microbiologist by training who has become one of the world’s most influential data sleuths. Back in 2015, when questions about Tessier-Lavigne’s research were emerging, Bik, then working in a Stanford lab unrelated to Tessier-Lavigne, was merely an image integrity hobbyist.
No longer. Bik has moved to the mainstream, particularly during the pandemic, when she took aim at high-profile work. She has more than 134,000 followers on Twitter, the New Yorker profiled her last year, and the New York Times published an op-ed by her last month titled: “Science Has a Nasty Photoshopping Problem.” Thanks to her preternaturally sharp eye for spotting apparent misconduct, journals have retracted close to a thousand articles.
While Bik may be the best known of the sleuths, many more work with her on similar projects or specialize in finding other problems in the literature, from plagiarism to statistical red flags that indicate either sloppiness or outright fraud.
In the meantime, researchers at various institutions have tried to automate the detection of image manipulation and duplication, sometimes using artificial intelligence — in other words, find a way to do what Bik and others do, but at massive scale. Journal publishers say they’re joining forces to share these tools, although they have yet to provide any details.
So far, however, validation of these tools is lacking. “If the technology is really out there to do this well and at scale, anything that encourages more publishers to do screening of images before publication is a good thing,” Rossner said. “They just need to know that it works.”
PubPeer, which just celebrated its 10th birthday, has grown to become a community of concerned citizens of science whose critiques — about not only images but methodological and statistical rigor and any other points of concern — journals, publishers and institutions can’t ignore.
To be sure, they often try. Take the long timeline in the Stanford case. So far, the limited comments from the university suggest it is following a playbook familiar to politicians: Ignore the issue until they can’t, then say the target had little to do with the allegedly problematic work before conducting an investigation that will remain shrouded in secrecy until some point at which they hope everyone has forgotten about the matter.
And to date, none of the four papers by Tessier-Lavigne has been corrected or retracted. Science, where two of the articles in question appeared, received corrections from Tessier-Lavigne in 2015 yet failed to publish them “due to an error on our part,” Thorp said in a statement.
Bik’s take is that the photoplay in the four articles under investigation involves “beautification” rather than falsification or fabrication of data. However, she believes a fifth article by Tessier-Lavigne, published in Cell in 1999 when he was at the University of California, San Francisco, shows signs of intentional and inappropriate manipulation, as STAT has reported. Bik, who now consults with journals and others on image ethics, told STAT she “would testify in court” that the image in question was “digitally altered.”
Since leaving Rockefeller University Press, which publishes JCB, Rossner has made a full-time career consulting on image manipulation and related issues. As disappointed as he is in journals that haven’t taken up the call to screen images before publication, he said advances such as PubPeer are promising. They “provide hope that these issues are going to be dealt with post-publication,” he said.
Private industry is taking notice. A company called Proofig will screen papers for signs of image doctoring so that researchers, journals, and others in the publishing stream can minimize “the risk of costly investigations and retractions after publication.” A recent email from the company urges authors to “Avoid scientific controversy – Use Proofig.”
Left unsaid, of course, is to not manipulate images in the first place. Maybe that should be the message more often.
Adam Marcus, an editorial director at Medscape, and Ivan Oransky, editor-in-chief of Spectrum and distinguished writer in residence at New York University’s Arthur Carter Journalism Institute, are co-founders of Retraction Watch. Oransky is a volunteer member of the board of directors of the PubPeer Foundation.