How editors and reviewers can contribute to minimizing waste in research

July 8, 2019 | 5 min read

By Hans Lund, Klara Brunnhuber

The concept of evidence-based research

On July 19th, 2001, "the United States Office for Human Research Protections (OHRP) suspended nearly all federally-funded medical research involving human subjects at Johns Hopkins University". The reason was simple (and tragic). A 24-year-old technician, Ellen Roche, volunteered to take part in a study and died. The study was designed to provoke a mild asthma attack to help doctors discover the reflex that protects the lungs of healthy people against asthma attacks. Ms. Roche inhaled hexamethonium but became ill and was put on a ventilator. Her condition deteriorated, and she sadly died on June 2, 2001.

The price of ignorance

One of the striking conclusions of an analysis of the events (and its follow-up) was that the project leader and the Institutional Review Board had failed to uncover published literature concerning the toxic effects of inhaling hexamethonium. According to the OHRP, this information was "readily available via routine MEDLINE and Internet database searches, as well as recent textbooks". The project leader had only performed a standard PubMed search and consulted standard, current-edition texts. In other words, neither the project leader nor the ethics committee performed a sufficiently thorough literature search, and both thus remained ignorant of vital material.

A lesson from the giants of history

From the very beginning of modern science, it has been an explicit ideal that new knowledge builds upon existing knowledge. This is often illustrated by Sir Isaac Newton's famous remark, "If I have seen farther it is by standing on the shoulders of giants". Consider also Lord Rayleigh's 1884 observation that "the work which deserves, but … does not always receive, the most credit is that in which discovery and explanation go hand in hand, in which not only are new facts presented, but their relation to old ones is pointed out". To wit: each new result should be interpreted in the context of earlier research.

How to live up to this ideal is the question. As Richard Light and David Pillemer argued 100 years after Rayleigh, researchers need systematic and transparent knowledge of previous studies on the same topic. In many situations, this can be accomplished through a systematic review of earlier similar studies.

The elephant in the room?

But aren't we just stating the obvious? Is this not already standard practice? One would think so! Over the last 20-25 years, a number of meta-studies have shown that researchers do not commonly use systematic reviews to justify new studies. In a seminal work, Karen Robinson and Steve Goodman demonstrated that fewer than 45% of newer studies referred to earlier similar studies. The authors also identified newer original studies that could have cited between three and 130+ earlier studies, yet the median number actually cited was two! Other analyses (e.g. Fergusson, 2005; Robinson, 2011; Sawin, 2015) clearly indicate the same lack of systematicity when referring readers to earlier similar studies (see figure below).

Three studies evaluated the Prior Research Citation Index (PRCI) as defined by Robinson 2011. The PRCI is simply the number of earlier studies that were cited divided by the number the authors could have cited. On average, the three meta-research studies found that only 21% of earlier similar studies were cited.
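As a purely illustrative (and hypothetical) example of the arithmetic: a new trial whose authors cite 3 of the 15 relevant earlier trials they could have cited would score a PRCI of 3 / 15 = 0.20. In other words, it acknowledges only 20% of the available prior evidence, close to the 21% average reported across the three meta-research studies.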

Potential excuses aside, a plausible interpretation of the results from meta-research is that authors seem to have little intention of referring comprehensively to earlier studies but instead rely solely on those references which reinforce the arguments they seek to advance.

Reinventing the wheel, one study at a time

When authors cite in this way, earlier studies are not used to justify the new study, and references become mere "window dressing". This has been shown to lead to considerable redundancy. A 2014 investigation identified 136 new studies published after a systematic review had already shown that the intervention was effective. Most surprising, though, was that 73% of the new studies cited that very systematic review, which showed there was no need for further research, yet went ahead anyway. Another, more concerning phenomenon that has been brought to light (e.g. Lau, 1992; Juni, 2004; Fergusson, 2005; Andrade, 2013; Clarke, 2014; Haapakoski, 2015) is that studies are occasionally initiated even when the knowledge available at the time clearly indicated that the treatment was effective and there was no need for further research. We dare to conclude that there is room for improvement!

Introducing the EBRNetwork

Continuing to expose patients to unnecessary studies is unethical, limits the funding available for important and relevant research, and diminishes the public's trust in research. The EBRNetwork was established in 2014 to raise awareness of this inappropriate practice and to promote a systematic and transparent approach to justifying (or even designing) new studies.

In October 2018, a new EU-funded European network called "EVBRES" was established. The aim of EVBRES is to raise awareness of the need to use systematic reviews when planning new health studies and when placing new results in context. EVBRES is an open network, so everyone with an interest in this topic is very welcome.

The role of editors and peer review

So where do editors and reviewers fit in? In 2016, the EBRNetwork identified a number of stakeholders relevant to the concept of EBR. Among the key stakeholders were editors and reviewers, whose responsibilities would be:

  • To assess whether the rationale and design of studies are adequately described within the context of systematic reviews of prior research.

  • To evaluate whether the description of earlier research is sufficient to enable interpretation of the results of submitted studies within the totality of relevant evidence.

  • To evaluate whether proposals for further research take account of earlier and ongoing research.

  • To evaluate whether proposals for further research include clear descriptions of target populations, interventions, comparisons, outcome measures, and study types.

Have your say!

Obviously, these "responsibilities" are open for discussion, so we invite everyone to share their views and suggestions. You can do so by commenting below, by joining EVBRES, or by contacting the EBRNetwork if you are outside Europe. We look forward to hearing from you!

Contributors

Hans Lund

Klara Brunnhuber
