How do you stop bad scientists? Hope they committed fraud.
Misconduct vs Malpractice in Science – A Primer.
Fraud and bad science seem to be everywhere these days.
There’s the episode that led to my favorite newspaper headline of the year so far, courtesy of the Wall Street Journal: “Harvard Probe Finds Honesty Researcher Engaged in Scientific Misconduct.” The story described the release of a Harvard investigation looking into the work of “prominent researcher Francesca Gino,” which concluded that she had “manipulated data and recommended that she be fired.”
Then, as discussed in Unsettled Science, Harvard mega-star longevity researcher David Sinclair was called out on X for his history of irreproducible, yet exceedingly profitable, results in a video by the physician Matt Stanfield, reposted over 300 times. This wasn’t the first time Sinclair’s research had been publicly questioned – Jennifer Couzin-Frankel covered it extensively in Science, in 2004 and 2011. 1
Sinclair survived and prospered nonetheless. He has resigned as president of The Academy for Health & Lifespan Research, and we can hope that researchers in his field will be suitably skeptical of any subsequent work he does. But there are no signs yet that the Harvard administration sees reason either to investigate him or to cut him loose.
Finally, the journal Nature ran a lengthy investigative article on the latest scandal about one of the Holy Grails of physics, a material that can act as a superconductor at room temperature. The subtitle of the Nature investigation promised to explain “how institutions ignored red flags,” a particularly pertinent issue, because it was Nature itself that had published not one, but now two irreproducible claims by the same physicist, Ranga Dias of the University of Rochester. The first was in 2020, retracted in 2022, and the second in March 2023, retracted the following December. Dias is “infamous for the scandal that surrounds his work,” Nature now says; he is accused of manipulating data, plagiarizing “substantial portions of his thesis,” and attempting “to obstruct the investigation of another paper by fabricating data.” While Dias is still apparently employed at the University of Rochester, we can assume his days are numbered.
So why were Dias and Gino investigated and likely on their way out, while Sinclair, who prospered for a decade after his work was first questioned, will apparently remain gainfully employed at Harvard?
A critical difference in the fate of these researchers is whether they committed scientific misconduct – fraud, plagiarism, manipulation of data, as Gino and Dias allegedly did – or just garden-variety bad science, as did Sinclair.
Understanding the difference between these two scientific pathologies – let’s call them misconduct (fraud) and malpractice (bad science) – is important to understanding what we do here at Unsettled Science. Nina and I are often, if not typically, writing about bad science, and we’re among the few journalists who consider it their beat.
Our books and our articles constitute lengthy arguments that entire research disciplines have emerged from foundations of bad science and continue to perpetuate it; that the unacceptable standards of bad science have been passed down from generation to generation, mentors to mentees, which is why the job of challenging this bad science has been left to journalists like us. We wish other investigative journalists would also consider bad science part of their beat, but for reasons I’ll discuss, they generally don’t.
Scientific misconduct (fraud) is the flashier story – researcher caught fudging data! – and the easier one to report. But bad science – scientific malpractice – is the far more insidious and tragic problem, particularly in fields like nutrition and chronic disease that are anything but academic when it comes to their influence on our lives.
Hence, the primer that follows.
Misconduct, a Sin of Commission
Committing fraud, as the verb “commit” implies, is a sin of commission. Researchers who commit fraud are making up or manipulating data. They are creating the evidence necessary to present the appearance of having achieved something that they haven’t – to con their peers into believing it. These researchers are not fooling themselves, which the Nobel Laureate physicist Richard Feynman famously warned is the easiest thing to do; rather, they know exactly what they’re doing and are trying to fool everyone else. They’re not doing their job badly – the job of establishing reliable information about their subject of study. They’re renouncing that obligation entirely. They have other goals.
This is why fraud is an inexcusable act in science. This is why it will prompt investigations, and why researchers who do it should be thrown out of the field. And they typically are… if they get caught.
That last caveat, of course, is the kicker. We don’t know how common fraud is in science, because we only learn of it when the perpetrators get caught. Researchers with an iota of common sense will commit fraud only to give the appearance of achieving an expected result – something mundane, if not downright boring. Their peers are unlikely to suspect fraud when confronted with such uninteresting work. Why bother?
Until recently, researchers who committed this kind of fraud would not have been caught. That’s now changing, as journals – Science was the first, at the beginning of the year – have started using AI to look for fraudulent or duplicated images in articles. John Timmer, science editor of Ars Technica, describes the motivation behind this kind of low-level, uninteresting, and perhaps common fraud:
Much of the image-based fraud we've seen arises from a dilemma faced by many scientists: It's not a problem to run experiments, but the data they generate often [aren't] the data you want. Maybe only the controls work, or maybe the experiments produce data that is indistinguishable from controls. For the unethical, this doesn't pose a problem since nobody other than you knows what images come from which samples. It's relatively simple to present images of real data as something they're not.
Getting caught in this kind of low-stakes deception is likely to lead to corrections or retractions, but not necessarily career cancellations. In these cases, researchers can always claim that the fraud was an accident, a one-off – “oops, my apologies” -- and move on. Only if they’re exposed as serial offenders are their careers likely to be imperiled.
The danger in committing fraud arises when the researchers publish something sufficiently revelatory that their peers will want to build on the work or compete with it – superconductivity at room temperature! That finding requires replication. Now, when researchers try to replicate the observation and fail, they need to figure out why. These cases of fraud are likely to be caught. These are also the cases that make the news.
But this kind of high-stakes fraud is almost assuredly rare. Researchers have little to gain from it other than very temporary career advancement. Entire belief systems are unlikely to be built on foundations of fraudulent results, because the first step in the scientific process – independent replication by others – will lead to exposure.
With bad science, on the other hand, the scenario plays out quite differently.