Retracted papers are used in clinical guidelines – how worried should we be?

In 1998, a now-debunked study claimed a link between the MMR vaccine and autism. The fiasco surrounding the study eroded trust in science and was blamed for a drop in vaccination rates and a sharp rise in measles cases.

In circumstances like this, study results can be removed from academic journals to stop the spread of untrustworthy evidence. This is called “retraction”. Retracted studies are rejected by the scientific community and, in theory, should play no further role in clinical or policy decision-making. Retraction can happen for a range of reasons, from scientific fraud (about 60% of cases) to honest mistakes or failures to follow proper ethics procedures.

We’ve seen just how damaging it can be when unreliable research results grab people’s attention, but how effective is retraction as a way of stopping that?

For retraction to be effective, we need to know which research has been retracted. But retracted papers can be hard to spot. Once a paper is “out there”, it’s hard to take it off the internet, so journals instead publish retraction notices announcing the paper’s new status.

The problem with this is that to know the paper is retracted, you have to see the notice. This is where Retraction Watch comes in. The passion project of two American medical journalists, Retraction Watch brings together retracted studies in one freely available database.

But does this help? A recent study investigated how often retracted studies are included in clinical practice guidelines (such as NICE guidelines in the UK) and systematic reviews (a form of research that systematically searches for and combines the available evidence on a topic, giving a more reliable answer than any individual study could alone). Both are highly respected forms of evidence and are relied on heavily by doctors and policy decision-makers.

In this latest study, researchers in Japan looked for reviews and guidelines that included retracted randomised controlled trials (the gold standard of clinical research) from the Retraction Watch database.

Worryingly, they found 127 reviews and guidelines that cited already-retracted trials without caution, and none of them issued a correction over the following two years. They also found a further 239 that included trials that were retracted later. Of these, fewer than one in 20 corrected themselves.

Striking results

These results are pretty striking, and it’s alarming to imagine decisions being made based on reviews and guidelines using untrustworthy results. But how worried should we be? There are some questions we can ask.

Are the reviews and guidelines in question the kind of high-quality studies that affect clinical and policy decision-making? A 2016 study highlighted the rapid acceleration in the publication of systematic reviews and meta-analyses, arguing that many were of poor quality, misleading or irrelevant. It counted 28,959 new reviews published in 2014 alone.

Another study estimated that the rate of publication of reviews in 2019 was 20 times that of 2000, and that more than 160,000 reviews were published over that period. Against numbers like these, the proportion citing retracted studies is very small indeed. And we must ask how likely it is that these are the high-quality reviews and guidelines that shape decision-making.

Does the inclusion of these retracted studies make a material difference to the results of those reviews or guidelines?

Reviews and guidelines often rely on a statistical method called “meta-analysis”, which combines the results of several studies into a weighted average. This can be extremely helpful for getting reliable results from lots of smaller studies that may not be very informative on their own. It also means that a single study rarely makes a big difference to a meta-analysis, because its contribution is diluted by all the others.
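
To make that weighted average concrete, here is a minimal Python sketch of the inverse-variance weighting behind a simple fixed-effect meta-analysis. The trial effect sizes and standard errors are invented purely for illustration; real reviews use dedicated tools such as RevMan or R’s metafor package.

```python
import math

# (effect size, standard error) for five hypothetical trials.
# These numbers are invented purely for illustration.
trials = [(0.30, 0.15), (0.25, 0.20), (0.40, 0.10), (0.20, 0.25), (0.35, 0.12)]

def pooled_effect(studies):
    """Fixed-effect meta-analysis: weight each study by 1/SE^2,
    so larger, more precise studies count for more."""
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * eff for w, (eff, _) in zip(weights, studies)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

effect, se = pooled_effect(trials)
print(f"Pooled effect: {effect:.3f} (standard error {se:.3f})")
```

Note how the largest weight goes to the trial with the smallest standard error: a small, imprecise study contributes relatively little to the pooled result.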

Systematic reviewers also often run “sensitivity analyses”, testing the effect of removing studies that have very different results or that they judge to be of lower quality. This further reduces the risk of a retracted study having a big effect on the overall result.
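
Continuing the hypothetical sketch above (and reusing its trials list and pooled_effect function), a simple “leave-one-out” check of this kind shows how little a single trial typically shifts the pooled result:

```python
# Leave-one-out sensitivity check: recompute the pooled effect with
# each trial removed in turn. If no single omission moves the result
# much, one unreliable (or later-retracted) trial is unlikely to have
# driven the conclusion.
for i in range(len(trials)):
    remaining = trials[:i] + trials[i + 1:]
    effect, _ = pooled_effect(remaining)
    print(f"Without trial {i + 1}: pooled effect = {effect:.3f}")
```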

Is it always a bad thing to include a retracted study? A study by Cochrane (a global non-profit group that reviews all the evidence on healthcare interventions and summarises the findings) investigated how its own reviews handle retracted studies.

The authors said that it is important to carefully consider why a paper was retracted. For example, a paper may have been retracted because the researchers didn’t have permission to use the data. They concluded that a blanket policy of excluding retracted studies might bias the results of guidelines by missing out on relevant data.

While we absolutely must pay attention to the issue of retracted studies and how they’re treated, how worried we should be about their inclusion in some reviews and guidelines depends on the answers to these questions.

And as for that fraudulent MMR study: a 2005 Cochrane review excluded it because of its study design, but it was such a small study (only 12 children) that even if it had been included, it probably wouldn’t have affected the review’s conclusions.

Jonathan Livingstone-Banks receives funding from the National Institute for Health Research (NIHR).