The COVID-19 Ivermectin Scandal Demonstrates How Vulnerable Science Is to Fraud
In January 2014, Haruko Obokata published two papers describing how to convert normal blood cells into pluripotent stem cells.
This was a coup at the time — it substantially simplified a previously laborious process and opened up new avenues of medical and biological research, all while neatly avoiding the bioethical concerns associated with harvesting stem cells from human embryos.
Additionally, the procedure was simple, involving nothing more than the application of a weak acid solution or mechanical pressure, oddly similar to wiping a rust stain off a knife.
Within days, scientists noticed that several of the images in the papers appeared irregular. And so began a period of widespread doubt. Could it really be that straightforward?
Because the procedure was so simple and curiosity ran so high, attempts to reproduce the papers’ findings began almost immediately. They failed. Obokata’s institute opened an investigation in February. By March, several of the papers’ co-authors had disavowed the procedures. By July, the papers had been retracted.
While the papers were plainly flawed, the heart of the matter remained unclear. Had the authors mislabeled a sample? Had they stumbled onto a method that worked once but was intrinsically unreliable?
Had they fabricated the data? Years later, the scientific community got something like an answer when other Obokata papers were retracted for image manipulation, data anomalies, and related concerns.
The entire episode was a textbook example of science correcting itself. A significant result was published; it was questioned; it was tested, investigated, and found wanting… and then it was retracted.
This is how we might expect organized skepticism to work every time. It does not.
For the great bulk of scientific work, it is exceedingly rare for other scientists to identify flaws at all, let alone marshal the worldwide forces of empiricism to correct them. The fundamental premise of academic peer review is that fraud is sufficiently infrequent, or sufficiently insignificant, to be undeserving of a dedicated detection mechanism.
Because most scientists believe they will never encounter a single instance of fraud in their careers, even the idea of double-checking the numbers in peer-reviewed papers, re-running the analyses, or verifying that experimental protocols were properly followed is judged superfluous.
Worse, the raw data and analytical code needed to perform a forensic analysis of a paper are not routinely published, and conducting this kind of stringent review is often seen as a hostile act, the sort of drudge work reserved for the highly motivated or the congenitally disrespectful.
Everyone is preoccupied with their own work, so what kind of scrooge would go to such lengths to invalidate someone else’s?
Which brings us neatly to ivermectin, an anti-parasitic drug that has been trialed as a COVID-19 treatment following positive results in lab-bench experiments in early 2020.
It first gained widespread attention after the publication, and then withdrawal, of an analysis by the Surgisphere group showing a significant reduction in death rates among patients who took it, which triggered a tremendous wave of use of the drug around the globe.
More recent evidence for ivermectin’s efficacy has depended heavily on a single study, released as a preprint (that is, published without peer review) in November 2020.
This study, which enrolled a large cohort of patients and reported a substantial treatment effect, was widely read: it was cited in dozens of academic papers and included in at least two meta-analyses that showed ivermectin to be, as the authors claimed, a “wonder drug” for COVID-19.
It is hardly an exaggeration to say that this one publication led thousands, if not millions, of people to obtain ivermectin to treat or prevent COVID-19.
The paper was retracted a few days ago amid allegations of fraud and plagiarism. A master’s student assigned to read the study as part of his degree noticed that the entire introduction appeared to be lifted almost verbatim from earlier scientific papers, and further investigation revealed that the datasheet the authors had posted online contained obvious inconsistencies.
It is difficult to overstate the magnitude of this failure for the scientific community. We proud stewards of knowledge accepted at face value a study so riddled with errors that it took a master’s student only a few hours to take it apart.
The seriousness with which the results were treated stood in stark contrast to the study’s quality. The paper contained multiple incorrect statistical tests, wildly implausible standard deviations, and a truly staggering degree of reported efficacy: the last time the medical community found a ‘90 percent benefit’ from a drug for a disease was when antiretroviral therapy was given to people dying of AIDS.
Nonetheless, no one noticed. For the better part of a year, serious, reputable researchers cited the paper in their reviews, doctors used it as evidence when treating patients, and governments relied on its conclusions in public health policy.
Nobody took the five minutes required to download the data file the authors had posted online and discover that it recorded deaths that occurred before the study began. Nobody pasted phrases from the introduction into Google, which is all it takes to see how much of it is identical to previously published articles.
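The first of those checks really is a five-minute job. A minimal sketch of what it might look like, using hypothetical column names and made-up illustrative rows (the real datasheet’s layout was different):

```python
import csv
import io
from datetime import date

# Hypothetical study start date, for illustration only.
STUDY_START = date(2020, 6, 8)

# Toy stand-in for a downloaded trial datasheet; column names
# and values are invented, not taken from the actual study.
CSV_DATA = """patient_id,outcome,date_of_death
1,died,2020-05-31
2,survived,
3,died,2020-06-20
"""

def deaths_before_start(csv_text, study_start):
    """Return IDs of patients whose recorded death predates the study."""
    reader = csv.DictReader(io.StringIO(csv_text))
    flagged = []
    for row in reader:
        if row["date_of_death"]:  # skip survivors (empty field)
            death = date.fromisoformat(row["date_of_death"])
            if death < study_start:
                flagged.append(row["patient_id"])
    return flagged

print(deaths_before_start(CSV_DATA, STUDY_START))  # ['1']
```

A record flagged this way is not proof of fraud on its own, but even one death logged before enrollment began is exactly the kind of anomaly that should prompt a closer look.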
This inattention and inaction perpetuate the problem: because we are studiously disinterested in the issue, we have no idea how much scientific fraud exists or where it can most easily be detected, and so we build no robust measures to address or mitigate its impact.
A recent editorial in the British Medical Journal argued that it may be time to revise our fundamental assumption about health research and to presume it is fraudulent until proven otherwise.
That is, not that all researchers are dishonest, but that new findings in health research should be met with a baseline of skepticism rather than blind faith.
This may seem extreme, but set against the cost of accepting that millions of people will occasionally be prescribed drugs on the basis of unvetted research that is later retracted entirely, it may be a small price to pay.
James Heathers is the Chief Security Officer at Cipher Skin and a researcher on scientific integrity.