You probably haven’t heard of cardiologist Don Poldermans, but experts who study scientific misconduct believe that thousands of people may be dead because of him.
Poldermans was a prolific medical researcher at Erasmus Medical Center in the Netherlands, where he analyzed the standards of care for cardiac surgery, publishing a series of definitive studies from 1999 until the early 2010s.
One crucial question he studied: Should you give patients a beta blocker, which lowers blood pressure, before certain heart surgeries? Poldermans’s research said yes. European medical guidelines (and to a lesser extent US guidelines) recommended it accordingly.
The problem? Poldermans’s data was reportedly fake. A 2012 inquiry by his employer, Erasmus Medical Center, into allegations of misconduct found that he “used patient data without written permission, used fictitious data and… submitted to conferences [reports] which included knowingly unreliable data.” Poldermans admitted the allegations and apologized, while stressing that the use of fictitious data was accidental.
After the revelations, a new meta-analysis published in 2014 reevaluated whether to use beta blockers before cardiac surgery. It found that a course of beta blockers made it 27 percent more likely that someone would die within 30 days of their heart surgery. That is, the treatment Poldermans had recommended on the basis of falsified data, and which Europe had adopted on the strength of his research, was actually dramatically increasing the odds that people would die in heart surgery.
Tens of millions of heart surgeries were conducted across the US and Europe from 2009 to 2013, the years when those misguided guidelines were in place. One provocative analysis by cardiologists Graham Cole and Darrel Francis estimated that the guidelines caused 800,000 deaths compared to a world in which best practices had been established five years sooner. While that exact number is hotly contested, a 27 percent increase in mortality for a common procedure, sustained for years on end, can add up to an extraordinary death toll.
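To get a feel for how a modest-sounding relative risk compounds at that scale, here is a back-of-envelope sketch. The surgery volume and baseline mortality below are hypothetical round numbers of my own, not figures from Cole and Francis’s analysis; only the 27 percent comes from the 2014 meta-analysis.

```python
# Back-of-envelope arithmetic: how a 27% relative increase in 30-day
# mortality compounds across a high-volume procedure. The volume and
# baseline mortality are hypothetical round numbers, NOT figures from
# Cole and Francis's analysis.

surgeries_per_year = 5_000_000   # hypothetical annual surgeries under the guidelines
baseline_mortality = 0.02        # hypothetical 2% baseline 30-day mortality
relative_increase = 0.27         # the increase reported by the 2014 meta-analysis
years_in_effect = 5              # roughly how long the guidelines stood

excess_per_year = surgeries_per_year * baseline_mortality * relative_increase
total_excess = excess_per_year * years_in_effect

print(f"Excess deaths per year: {excess_per_year:,.0f}")                   # 27,000
print(f"Excess deaths over {years_in_effect} years: {total_excess:,.0f}")  # 135,000
```

With these made-up inputs, the toll is 27,000 excess deaths a year and 135,000 over five years. The point is the scale, not the exact figure.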
I learned about the Poldermans case when I reached out to some scientific misconduct researchers, asking them a provocative question: Should scientific fraud be prosecuted?
Unfortunately, fraud and misconduct in the scientific community aren’t nearly as rare as one might like to believe. We also know that the consequences of being caught are frequently underwhelming. It can take years to get a bad paper retracted, even when the flaws are readily apparent. Sometimes, scientists alleged to have falsified their data file frivolous lawsuits against the peers who point it out, silencing anyone else who might speak up about bad data. And we know this behavior has high stakes: it can dramatically affect treatment options for patients.
In cases where research dishonesty is literally killing people, isn’t it appropriate to turn to the criminal justice system?
Should research fraud be a crime?
In some cases, research misconduct may be hard to distinguish from carelessness.
If a researcher fails to apply the appropriate statistical correction for multiple hypothesis testing, they will probably get some spurious results. And researchers are often heavily incentivized to be careless in exactly these ways by an academic culture that puts non-null results above all else (that is, rewarding researchers for finding an effect even if it is not methodologically sound, while being unwilling to publish sound research that finds no effect).
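To make the statistical point concrete, here is a minimal simulation, not drawn from any real study, of what happens when a researcher runs many tests on pure noise. Without a correction, roughly 5 percent of the tests come out “significant” by chance alone; a standard fix such as the Bonferroni correction restores the intended error rate.

```python
import numpy as np
from scipy import stats

# Simulate a study that runs 100 independent tests on pure noise: every
# null hypothesis is true, so any "significant" result is spurious.
rng = np.random.default_rng(0)
n_tests, n_samples, alpha = 100, 50, 0.05

p_values = np.array([
    stats.ttest_ind(rng.normal(0, 1, n_samples),          # "treatment" group: no real effect
                    rng.normal(0, 1, n_samples)).pvalue   # "control" group: no real effect
    for _ in range(n_tests)
])

# Without correction, we expect about alpha * n_tests = 5 false discoveries.
print("Uncorrected 'discoveries':", (p_values < alpha).sum())

# The Bonferroni correction divides the significance threshold by the
# number of tests, restoring the intended family-wise error rate.
print("Bonferroni 'discoveries':", (p_values < alpha / n_tests).sum())
```

A researcher who skips the correction, runs enough comparisons, and reports only the hits can publish “effects” that were never there, without ever fabricating a data point.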
But I’d argue it’s a bad idea to prosecute such behavior. It would produce a serious chilling effect on research and likely make the scientific process slower and more legalistic, which carries its own cost in lives: deaths that could have been avoided had science moved more freely.
So the conversation about whether to criminalize research fraud tends to focus on the most clear-cut cases: intentional falsification of data. Elisabeth Bik, a microbiologist who studies fraud, made a name for herself by demonstrating that images of experimental results in many medical journal papers had clearly been altered or duplicated. That’s not the kind of thing that can be an innocent mistake, so it offers something of a baseline for how often manipulated data gets published.
While some scientific fraud could technically fall under existing statutes that prohibit lying on, say, a grant application, in practice it is more or less never prosecuted. Poldermans lost his job in 2011, but most of his papers weren’t even retracted, and he faced no further consequences.
But in response to growing awareness of fraud’s frequency and its harms, some scientists and scientific-fraud watchdogs have proposed changing that. A new statute, narrowly tailored to scientific fakery, could make it clearer where to draw the line between carelessness and fraud.
The question is whether legal consequences would actually help with our fraud problem. I asked Bik what she thought about proposals to criminalize the misconduct that she studied.
Her reaction was that, while it’s not clear whether criminalization is the right approach, people should understand that there are currently almost no consequences for wrongdoers. “It’s maddening when you see people cheat,” she told me. “And even if it involves grant money from the NIH, there’s very little punishment. Even with people who have been caught cheating, the punishment is super light. You are not eligible to apply for new grants for the next year or sometimes three years. It’s very rare that people lose jobs over it.”
Why is that? Fundamentally, it’s a problem of incentives. It is embarrassing for institutions when one of their researchers commits misconduct, so they’d rather impose a mild penalty and not keep digging. There’s little incentive for anyone to get to the bottom of misconduct. “If the most serious consequence for speeding was a police officer saying ‘Don’t do that again,’ everyone would be speeding,” Bik told me. “This is the situation we have in science. Do whatever you want. If you get caught, it’ll take years to investigate.”
In some ways, a legal statute isn’t the ideal solution. Courts, too, are guilty of taking years to deliver justice in complex cases, and they aren’t well suited to answering detailed scientific questions; they would almost certainly rely on scientific institutions to conduct the actual investigations. So what really matters is those institutions, not whether they’re attached to a court, a nonprofit, or the NIH.
But in sufficiently severe cases of misconduct, it does seem to me that it would be a major advantage to have an institution outside academia working to get to the bottom of things. A well-designed statute allowing prosecution for scientific fraud could counteract the overwhelming incentive to let misconduct go unpunished and move on.
If investigations were conducted by an outside agency (like a prosecutor), institutions could no longer protect their reputations simply by sweeping incidents of fraud under the rug. And the outside agency would not have to be a prosecutor; an independent scientific review board would probably also suffice, Bik said.
Ultimately, prosecution is a blunt tool. It might help provide accountability in cases where no one is incentivized to provide it — and I do think in cases of misconduct that lead to thousands of deaths, it would be a matter of justice. But it’s neither the only way to solve our fraud problem nor necessarily the best one.
So far, however, efforts to build institutions within the scientific community that police misconduct have had only limited success. At this point, I’d consider it a positive if there were efforts to allow external institutions to police misconduct as well.