Reproducibility of research results: a subjective view

Image: tpmartins

Ethical behavior in experimentation, in the field of research, and in scholarly communication is an essential requirement for the academic community and society as a whole. The objective of codes of ethical behavior is to protect patients, animals used in scientific experiments, and all those involved in scientific research, as well as to ensure the intellectual property rights of authors and their institutions and to guarantee the reasonable use of the resources employed in both publicly and privately funded research.

Learned societies, publisher associations and other entities around the world are continually drafting and updating codes of ethical behavior, which scientific journals reference in their instructions to authors and which must be adhered to in papers submitted for publication.

This blog has been publishing posts on topics concerning ethical behavior in scientific publishing, such as plagiarism in its various forms, as well as posts dealing with the tools that can help detect it. The target audience is authors, editors, peer reviewers and the academic community in general.

The topic of the reproducibility of research results has taken up a growing amount of space in the scientific literature over the last decade. Unlike plagiarism, it is more difficult to detect, and there is a tendency to underestimate and underrate it, since exposing it requires that other researchers redo an experiment under the conditions described in the publication and obtain outcomes different from those originally presented. Meanwhile, the percentage of articles retracted on the grounds of fraud has increased tenfold since 1975.

According to the National Science Foundation¹, the term scientific misconduct² covers the following activities:

  • Fabrication – concocting results and publishing them as if they were authentic. On many occasions these are backed up with references in order to give the impression of consensus, even though they are false and do not bolster the argument;
  • Falsification – manipulating research materials, tools, processes, modifying or omitting results which do not fit in with the author’s premise;
  • Plagiarism – appropriating ideas, processes, results or the language of others without the appropriate credit being given. Variations on this theme include self-plagiarism and ghost writing;
  • Violating ethical principles relating to patients and animals used in scientific experiments;
  • Including authors who make no meaningful contribution to the creation of the work in question or to its drafting.

Published papers can be retracted by their authors because of scientific misconduct, as in the cases described above. An article may also be retracted if and when the authors determine there was an error in the experiment or an error in the interpretation of the results after publication has taken place. However, this does not constitute fraud.

A detailed review of 2,047 retracted articles indexed in PubMed, conducted in May 2012 by Fang, Steen and Casadevall, concluded that only 21.3% were retracted because of errors, while 67.4% were retracted because of scientific misconduct, which included fraud or suspected fraud (43.4%), duplicate publication (14.2%) and plagiarism (9.8%).
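As a quick sanity check (not part of the original study, just the arithmetic implied by the figures above), the three misconduct categories do add up to the reported 67.4%, and misconduct outnumbers honest error roughly three to one:

```python
# Retraction breakdown reported by Fang, Steen and Casadevall (2012)
# for 2,047 PubMed-indexed retractions; figures are percentages.
misconduct = {
    "fraud or suspected fraud": 43.4,
    "duplicate publication": 14.2,
    "plagiarism": 9.8,
}
error = 21.3

misconduct_total = round(sum(misconduct.values()), 1)
print(misconduct_total)                    # 67.4 — share attributed to misconduct
print(round(misconduct_total / error, 1))  # 3.2 — misconduct vs. honest error
```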

According to John P.A. Ioannidis of Stanford University in the US, numerous factors can contribute to this growing wave of skepticism about research results. It is the natural role of scientists to discover something groundbreaking, yet many hypotheses are put forward with a high likelihood of being incorrect. The pressure researchers are under to publish, and the competitiveness amongst peers seeking to advance their careers, combine with the natural human tendency to see only what one wants to see; this bias, often unconscious, colors the interpretation of results. According to the author, who proposes a mathematical model to show that most published research findings are actually false, the smaller the sample and the less rigorous the standards of statistical significance applied to the experimental results, the greater the probability of error.
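The core of Ioannidis's model, in the notation of the 2005 paper cited below, is the positive predictive value (PPV) of a claimed finding, i.e. the probability that a statistically significant result is actually true. With α the Type I error rate, β the Type II error rate (so 1 − β is the study's power), and R the pre-study odds that the probed relationship is true:

```latex
\mathrm{PPV} = \frac{(1 - \beta)\,R}{R - \beta R + \alpha}
```

Small samples lower the power 1 − β, and lax significance standards raise α; both push the PPV down, which is the formal version of the claim in the paragraph above.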

In addition to the scientific community and academic circles, reproducibility in research results has also come to the attention of society in general. In a recent article in the US newspaper The New York Times, the journalist G. Johnson cites The Journal of Irreproducible Results³, a parody of a scientific journal that has been publishing absurd and humorous articles on research topics since 1955. The satire is based on the principle that “real” research results are indeed reproducible, a fact that is being questioned in many areas of knowledge.

It is possible, in Johnson’s view, that researchers actually do believe that their findings are true, and herein lies the problem because “the more passionate they are about their work, the more likely they are to be biased” (2014).

Given the exponential increase in scientific publications (the number has been doubling every 10 to 15 years since the 17th century), one gets an indication of the size of the problem. If an outcome is obtained only under very specific circumstances, making it difficult to reproduce in the same or another laboratory, “is it truly an advance in human knowledge?” (Johnson 2014).

Notes

¹ National Science Foundation. Available from: <https://www.nsf.gov/>.

² Wikipedia. Scientific misconduct. Available from: <http://en.wikipedia.org/wiki/Scientific_misconduct#Forms_of_scientific_misconduct>.

³ The Journal of Irreproducible Results. Available from: <http://www.jir.com/>.

References

Editorial ethics: the detection of plagiarism by automated means. SciELO in Perspective. [viewed 18 February 2014]. Available from: <http://blog.scielo.org/en/2014/02/12/editorial-ethics-the-detection-of-plagiarism-by-automated-means/>.

FANG, F.C., STEEN, R.G., and CASADEVALL, A. Misconduct accounts for the majority of retracted scientific publications. PNAS. 2012. Available from: <http://www.pnas.org/content/early/2012/09/27/1212247109>.

IOANNIDIS, J.P.A. Why most published research findings are false. PLoS Med. 2005. Available from: <http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124>.

JOHNSON, G. New Truths That Only One Can See. The New York Times. January 20, 2014. Available from: <http://mobile.nytimes.com/2014/01/21/science/new-truths-that-only-one-can-see.html>.


About Lilian Nassi-Calò

Lilian Nassi-Calò studied chemistry at Instituto de Química – USP, holds a doctorate in Biochemistry from the same institution, and completed a post-doctorate as an Alexander von Humboldt fellow in Würzburg, Germany. After her studies, she was a professor and researcher at IQ-USP. She also worked as an industrial chemist and is currently Coordinator of Scientific Communication at BIREME/PAHO/WHO and a collaborator of SciELO.


Translated from the original in Portuguese by Nicholas Cop Consulting.


How to cite this post [ISO 690/2010]:

NASSI-CALÒ, L. Reproducibility of research results: a subjective view [online]. SciELO in Perspective, 2014 [viewed ]. Available from: https://blog.scielo.org/en/2014/02/19/reproducibility-of-research-results-a-subjective-view/

