Reproducibility of research results: the tip of the iceberg


A previous post on this blog addressed the question of the reproducibility of research results and how this topic is attracting increasing attention in the scientific community and in society at large. One measure of the reliability of scientific research is the number of retractions of previously published articles when it is determined that they are fraudulent or that they contain errors of experimentation or of interpretation of the results. The number of retractions has increased tenfold since 1975, and the greater proportion of these is due to fraud. As already mentioned, however, the extent of irreproducibility is underestimated, since repeating an experiment requires time, human and material resources, and a strong reason to be suspicious of the results.

The development of new drugs for the treatment of diseases mostly has its beginnings in scientific studies carried out at research institutions working in the particular clinical field concerned. Pharmaceutical companies are therefore amongst the parties most interested in the reliability of these results, since it is on this basis that they develop the costly projects needed to test, and possibly produce, these drugs.

However, studies carried out by the pharmaceutical companies Bayer (Germany) and Amgen (USA) concluded that between 60% and 70% of studies in the field of biomedicine may include non-reproducible results. A 2011 study of clinical trials for drugs under development for certain diseases showed that the success rate for new drugs in phase II clinical trials had fallen from 28% to 18% over the preceding few years. The most frequent cause of this lack of success is the low level of efficacy of the drugs tested. This points to the limited predictability of new drugs in combating diseases for which there were previous indications that these drugs would be an effective treatment. Drugs under development come from various sources, including the companies’ own laboratories, but the major source is reports in the scientific literature, whether published in journals or presented at conferences. It is easy to imagine the consequences for pharmaceutical companies if the experiments upon which these studies are based were neither trustworthy nor reproducible. The test phase involves a change in approach from “interesting” to “feasible/marketable”, since testing a drug that turns out to be a failure involves significant costs.

It is for this reason that pharmaceutical companies validate published results before embarking on a project involving clinical trials. Studies carried out by Bayer in Germany on the reproducibility of research results for 67 drugs being developed to treat cancer (47 drugs, 70%), cardiovascular diseases (8 drugs, 12%) and women’s health (12 drugs, 18%) found 43 cases (65%) of inconsistencies, 14 cases (21%) in which the published results were validated, 5 cases (7%) in which the key data were reproduced, 3 cases (4%) in which some results were confirmed, and 2 cases (3%) corresponding to projects whose drugs had been developed exclusively within the company itself. According to researchers working at Amgen in the USA, only 6 (11%) of 53 landmark articles in preclinical oncology research were reproducible.
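As a quick check on these figures, the breakdown can be tallied programmatically. The short sketch below (plain Python, with category labels paraphrased from the figures above rather than taken from the Bayer study itself) confirms that the outcome categories account for all 67 projects and recomputes each category’s share to one decimal place.

```python
# Outcome counts from the Bayer validation study, as reported above.
outcomes = {
    "inconsistencies with published results": 43,
    "published results validated": 14,
    "key data reproduced": 5,
    "some results confirmed": 3,
    "drugs developed exclusively in-house": 2,
}

total = sum(outcomes.values())
assert total == 67  # the 67 drug-development projects are fully accounted for

for label, count in outcomes.items():
    print(f"{label}: {count}/{total} ({100 * count / total:.1f}%)")
```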

Irreproducibility of published experiments, as we have seen, plays a significant role in the failures recorded in pharmaceutical industry projects based on pre-clinical results. The evaluation system used for hiring researchers and advancing their careers, as well as for obtaining research funding, is largely based on the number of publications, preferably with positive and direct outcomes in high impact journals; in other words, “a perfect story”. This culture discourages the publication of negative results, or of conclusions that differ from the premise originally proposed by the author, leading to false expectations for a new drug. Editors and peer reviewers play an important role in changing this culture, as they should be more rigorous in their evaluation of submitted works. This will require a major commitment from the academic community to demand greater thoroughness in experiments and a more detailed description of the conditions under which experiments are performed, so that other laboratories can reproduce them. Interestingly, irreproducible studies published in journals whose Impact Factor (IF) is situated in the first quartile (greater than 20) are cited more than reproducible studies published in the same journals. The same relationship is observed in journals with an IF in the second quartile (5-19), but with an even more pronounced bias towards irreproducible studies.

Gene therapy, part of the field of genomics, is an approach in which DNA fragments are used as a “drug” to treat diseases. This is usually done by incorporating DNA fragments into a patient’s chromosomes, so that a protein encoded by the introduced DNA sequence is expressed¹. From the coining of the term in 1972 to the present day, very successful treatments for congenital diseases (where the DNA of the patient carries one or more mutations) and for some forms of cancer have been recorded.

Genomics is considered a “hot” discipline (hence its competitive nature): work in the field is usually published in high IF journals and attracts countless citations. Genomics presents itself as a totally innovative approach to the treatment of many diseases, but it too is facing the impact of false positives. Often, doctors who are not adequately trained to interpret the results of their patients’ gene sequencing publish these results without due care, announcing chance observations as proven biological effects.

Neuroscience is also an area of the medical sciences that suffers from a low level of reproducibility, given the small sample sizes and the low statistical power of the studies involved. According to the authors of a detailed review article published in Nature Reviews Neuroscience², improving reproducibility depends upon well-established methodological principles which are, however, frequently ignored by researchers.
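To make the sample-size point concrete, here is a minimal sketch (Python with SciPy; the effect size, significance level and group sizes are illustrative assumptions rather than figures from the article) of the statistical power of a two-sided, two-sample t-test, that is, the probability of detecting a genuine effect of medium size:

```python
# A minimal sketch of why small samples undermine reliability:
# power of a two-sided, two-sample t-test for a medium effect
# (Cohen's d = 0.5, alpha = 0.05), via the noncentral t distribution.
from scipy import stats

def power_two_sample_t(n_per_group: int, d: float = 0.5, alpha: float = 0.05) -> float:
    """Probability of detecting a true standardized effect d with n subjects per group."""
    df = 2 * n_per_group - 2                 # degrees of freedom
    ncp = d * (n_per_group / 2) ** 0.5       # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-sided critical value
    # Power = P(|T| > t_crit) when T follows a noncentral t distribution.
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

for n in (10, 20, 50, 100):
    print(f"n = {n:3d} per group -> power = {power_two_sample_t(n):.2f}")
```

Under these assumptions, groups of 10 to 20 subjects detect a real medium-sized effect only about 18-34% of the time, which illustrates why so many underpowered studies fail to replicate.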

The examples described above illustrate how many areas of knowledge are faced with a crisis of confidence in research results. Awareness on the part of researchers, editors, directors of research institutions, funding agencies and private enterprise is the first step towards the detection, correction and prevention of the problem. A change of attitude on the part of the scientific community, away from the appreciation of positive results and rewards for quantity and towards the quality of results, could without a doubt contribute to the solution.

Notes

¹ Wikipedia. Gene Therapy. Available from: http://en.wikipedia.org/wiki/Gene_therapy

² BUTTON, K. S., et al. Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience. 2013, v. 14, nº 5, pp. 365-376. doi:10.1038/nrn3475

References

ARROWSMITH, J. Phase II failures: 2008–2010. Nature Reviews Drug Discovery. 2011. Available from: <http://www.nature.com/nrd/journal/v10/n5/full/nrd3439.html>

BEGLEY, C. G. and ELLIS, L. M. Drug development: Raise standards for preclinical cancer research. Nature. 2012. Available from: <http://www.nature.com/nature/journal/v483/n7391/full/483531a.html>

Declaration recommends eliminate the use of Impact factor for research evaluation. SciELO in Perspective. [viewed 31 January 2014]. Available from: http://blog.scielo.org/en/2013/07/16/declaration-recommends-eliminate-the-use-of-impact-factor-for-research-evaluation/

Error prone. Editorial. Nature. 26 July 2012. doi:10.1038/487406a

FANG, F. C., STEEN, R. G. and CASADEVALL, A. Misconduct accounts for the majority of retracted scientific publications. PNAS. 2012. Available from: <http://www.pnas.org/content/early/2012/09/27/1212247109>

JOHNSON, G. New Truths That Only One Can See. The New York Times. January 20, 2014. Available from: <http://mobile.nytimes.com/2014/01/21/science/new-truths-that-only-one-can-see.html?referrer=>

Reproducibility of research results: a subjective view. SciELO in Perspective. [viewed 24 February 2014]. Available from: http://blog.scielo.org/en/2014/02/19/reproducibility-of-research-results-a-subjective-view/


Open Access and a call to prevent the looming crisis in science. SciELO in Perspective. [viewed 31 January 2014]. Available from: http://blog.scielo.org/en/2013/07/31/open-access-and-a-call-to-prevent-the-looming-crisis-in-science/

PRINZ, F., SCHLANGE, T., and ASADULLAH, K. Believe it or not: how much can we rely on published data on potential drug targets? Nature Reviews Drug Discovery. 2011. Available from: <http://www.nature.com/nrd/journal/v10/n9/full/nrd3439-c1.html>


About Lilian Nassi-Calò

Lilian Nassi-Calò studied chemistry at Instituto de Química – USP, holds a doctorate in Biochemistry from the same institution and carried out post-doctoral research as an Alexander von Humboldt fellow in Wuerzburg, Germany. After her studies, she was a professor and researcher at IQ-USP. She also worked as an industrial chemist and is presently Coordinator of Scientific Communication at BIREME/PAHO/WHO and a collaborator of SciELO.


Translated from the original in Portuguese by Nicholas Cop Consulting.


How to cite this post [ISO 690/2010]:

NASSI-CALÒ, L. Reproducibility of research results: the tip of the iceberg [online]. SciELO in Perspective, 2014 [viewed ]. Available from: https://blog.scielo.org/en/2014/02/27/reproducibility-of-research-results-the-tip-of-the-iceberg/

