Following a spate of recent high-profile journal retractions, some scientists are questioning the integrity of the peer-review process and asking how flawed articles come to be published in high-impact journals. But what happens when a journal article is retracted, and can the details it contained ever truly be disregarded? Let’s take a deeper look into how and why so many scientific articles are getting retracted, and what impact this has on the spread of scientific misinformation.
When you read a journal article, it can feel as though this is where the story ends. The work is done, the results are written, and it’s published for the world to see. You trust that the peer-review process has done its job, that the findings are true and reliable, and you believe what the authors claim. But sometimes papers fall through the cracks, and misinformation or falsified data can enter the public domain. When mistakes are realised or lies are uncovered, the article is retracted. This year, a number of papers have sparked outrage for their controversial claims and have subsequently been retracted, but not before the damage was already done.
Facts and figures
The retraction of a journal article is the process of formally withdrawing it from the scientific literature, whether due to misconduct, false data, or failures of the peer-review process. In 2019, the publishing watchdog Retraction Watch reported that over 1,400 papers were retracted1. From stolen manuscripts to sponsored scientists, 30% of retractions were due to plagiarism or duplication of published work, and 20% were due to faked peer review2. Overall, approximately 60% of retractions are a result of fraud2, often from faking or deliberately manipulating data. In fact, a 2016 study, which screened 20,621 scientific papers published in 40 journals between 1995 and 2014, found that almost 2% of them contained images that had been manipulated by the authors3.
The most shocking part is the huge number of retractions some authors accumulate during their careers. Yoshitaka Fujii from Japan currently tops the leaderboard for the greatest number of retractions at a staggering 183 papers4,5, closely followed by German anaesthesiologist Joachim Boldt5, who had 98 papers retracted after a reader spotted a suspicious image. This led to the revelation that he had been falsifying data and recommending a compound that was later found to increase the risk of death during surgery6,7. But it’s not just obscure journals and mistrusted researchers facing retractions; a 2019 paper from Nobel Prize winner Frances Arnold was retracted from Science last year after her lab was unable to replicate the initial findings.
And this problem is growing. In 2000, under 100 papers were retracted, but this had risen to over 1000 by 20142. Concerningly, the rate of retractions due to genuine errors has not even doubled, meaning a great many of these were due to deliberate wrongdoing. Although the rise could be caused by better detection of fraud in older papers, a recent surge in controversial articles suggests misconduct is still pervasive in science.
As well as faking results and issues with replication, several recent papers have been slated for their controversial topics, being labelled as racist and sexist by critics.
In June, the prestigious chemistry journal Angewandte Chemie came under fire for a now-deleted essay by Tomáš Hudlický of Brock University in Canada, which claimed that improving diversity is harming organic synthesis research. The author lamented the practice of hiring people due to their ‘social group’ rather than their qualifications, even going as far as to say that attempts to include more women in organic chemistry were ‘diminishing the contributions by men’. The paper sparked outrage, being called ‘abhorrent’ and ‘egregious’ by scientists on Twitter, and ‘pure opinion’ by Andrew Bissette, an editor at Nature. Despite claims by the journal that the article was a draft published in error, 16 of the 44 board members resigned over the scandal, leaving many questioning how such an essay could ever make it past peer review8.
Other papers have made equally sexist claims, often based on discriminatory underlying assumptions. A paper published in the Journal of Vascular Surgery in July this year set out to quantify the ‘unprofessional behaviour’ of young surgeons based on their social media posts. Inappropriate attire in pictures was defined by the authors as ‘pictures in underwear, provocative Halloween costumes, and provocative posing in bikinis/swimwear’; the authors also flagged posts about abortion laws and gun control9. The paper faced immediate backlash over what a surgeon is “meant to look like”, spawning the #MedBikini hashtag and leading to the rapid retraction of the paper. Another study, published in 2013, claimed that women with a rare form of endometriosis, a gynaecological disorder where tissue resembling the lining of the uterus grows outside of the uterus, were rated as more attractive than other women. If this wasn’t awful enough, it later came to light that the participants had not consented to being rated in this way, believing that their body measurements were being used for more objective analyses10. The paper was only retracted this year when this ethical violation came to light, not because of its obviously sexist content.
Another paper hit the headlines recently for its controversial claims on racial discrimination. The paper entitled ‘Officer Characteristics and Racial Disparities in Fatal Officer-Involved Shootings’, published in PNAS in 201911, estimated “the role of officer characteristics in predicting the race of civilians fatally shot by police”12. From this study, the authors wrongly inferred there is no racial bias in the US police force, drawing unsubstantiated conclusions and recommending broad policies regarding the diversity of the police force. However, the data analysis they performed did not actually address these claims. Despite corrections to the paper to rectify these exaggerated conclusions, the authors called for its retraction just six days after its initial publication due to its “continued misuse” by the media. Critics also claimed its mere existence served as evidence to racists that there is no problem of racial bias in America’s police, undermining a key message of the Black Lives Matter movement.
Article retraction is therefore one approach to undo the perpetuation of unconscious biases in science. But, as Tom Welton from Imperial College London says, articles like these prove that ‘perfectly conscious bias goes on as well’. Despite publishers’ best efforts, discriminatory ideas are still being published. Retractions may play a key role in removing these provocative papers from the public record, but is it possible to wipe them from history?
The aftermath of retraction
The long-term impact of misleading information put into the public domain will be well-known to many following Andrew Wakefield’s 1998 paper in The Lancet, claiming a link between the MMR vaccine and the development of autism in children. The paper remained available for nearly 12 years, only being retracted in 2010 after receiving 633 citations and enormous amounts of media interest. Many claimed that Wakefield had received funding from a Legal Aid Board looking for evidence that the vaccine could cause autism, in order to win compensation for affected parents13. The paper was finally retracted because ethical approval had not been obtained to conduct the study. The paper that pushed the anti-vaxxer movement into the mainstream media has accumulated 669 citations since its retraction, making it the third most cited retraction of all time14. It seems that although the publishers may no longer trust its science, other authors and the public seem less aware of this.
A more recent example of the potential danger of publishing false research is the controversy surrounding hydroxychloroquine. Earlier this year, there was much excitement over the potential benefit of the anti-malarial drug for the treatment of COVID-19. Donald Trump even claimed he was taking it to prevent him from catching the virus15. However, clinical trials were stopped short when a paper published in The Lancet in May, based on data from the Chicago-based company Surgisphere, claimed that hydroxychloroquine increased the risk of death from COVID-1916. The article was subsequently retracted, however, when the company failed to prove the patient data even existed. But this hasn’t stopped the spread of misinformation. In September, the French National Authority for Health changed its guidelines to advise against the use of hydroxychloroquine for COVID-19, citing The Lancet paper with no mention of its recent retraction18, rather than the growing body of evidence that the drug has no benefit for preventing or treating the disease17; a fact to which Trump will surely now attest.
Retraction can also have a devastating impact on the researchers caught up in the scandals. Whilst Fujii and other highly retracted authors appear to carry on unfazed, the stigma of retraction, even when due to genuine mistakes, can do huge damage to a scientist’s reputation. A 2017 study found that authors can experience a 10% drop in citations after a formal withdrawal19. This is particularly damaging for young scientists or those reliant on collaborators who are accused of misconduct. For example, in 2019 Cambridge University researcher Steve Jackson had papers retracted from Nature and Science on the same day after a co-author admitted to fabricating the research20. In one heart-breaking case, respected scientist Yoshiki Sasai reportedly took his own life after the retraction of a Nature paper from 2014, which falsely claimed to have found an easy way to produce stem cells21. It is clear that, aside from the potential damage to public health, journal retraction can take a huge toll on those individuals intimately involved with the research under scrutiny.
Addressing the problems
Journal retraction itself is not the problem here; far from it, in fact. It is a vital process in righting the wrongs caused when erroneous research is published. What the growing number of retractions for issues surrounding discrimination, forgery, and fakery suggests, however, is that there are fundamental issues with the peer-review process that are allowing poor-quality papers to be published.
Firstly, in the wake of the COVID-19 crisis, the peer-review process has been greatly accelerated to publish critical research as soon as possible. Although editors will argue everything has been done to maintain high standards, releasing so many papers so quickly will inevitably have let some poor-quality or questionable studies slip through the net.
Others point to issues with the diversity of scientific journals, in particular their editorial teams, leading to biased reports being published. Poor-quality research being published by established scientists, while young researchers and underrepresented groups struggle to get their work seen, is the ‘definition of privilege’ according to Cathleen Crudden from Queen’s University in Canada, the first board member to resign following the Angewandte Chemie scandal. Diversifying the editorial boards of journals could therefore be critical in helping to spot both unconscious and conscious bias in manuscripts. This will require greater support for women and other underrepresented groups throughout their scientific careers, ensuring that equal numbers of people reach the level of expertise needed to become a journal editor, promote underrepresented work, and spot biased research.
“The pressure to get research published […] can […] prevent people speaking out about their own honest mistakes.”
The level of expertise required to review a paper is also being raised at journals such as The Lancet following the hydroxychloroquine scandal, in the hope of spotting falsified data or incorrect assumptions22. Other journals are bringing in statistical experts to help spot data manipulation. There have also been calls to change the rules around authorship, requiring all authors on a paper to have seen the data and verified its validity before publication. Wider use of more stringent plagiarism detection software will also help to reduce duplication of papers and data across multiple journals.
But maybe a cultural shift is needed. The pressure to get research published in order to advance your career can lead morally corrupt scientists to manipulate and falsify their data, and can prevent people from speaking out about their own honest mistakes. Perhaps if we put less pressure on getting papers into high-impact journals and focused more on the quality and relevance of research, the rates of misconduct would fall. Finally, it is the responsibility of all researchers to ensure that the work we publish, or the papers on which we are named, is true, honest, and free from bias. And if it is later disproved, we must communicate clearly with other researchers and the public at large about what went wrong, and what we can learn from our research being retracted.
References & further reading
Title image: adapted by Alice Gowland from a photo by Steve Johnson from Pexels.