Renata Demian Meinel, Post-graduate student of the Professional Master’s Program in Education, Management and Diffusion of Biosciences (MP-EGeD) of the Instituto de Bioquímica Médica Leopoldo de Meis (IBqM) of the Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ, Brazil
Anna Carolina Braga de Lopes, Post-graduate student of the MP-EGeD of the IBqM of the UFRJ, Rio de Janeiro, RJ, Brazil
Raquel Cristina Vieira Spetseris, Post-graduate student of the MP-EGeD of the IBqM of the UFRJ, Rio de Janeiro, RJ, Brazil
Sonia Maria Ramos de Vasconcelos, Responsible for the Research Methodology course of the MP-EGeD and Associate Professor of the IBqM of the UFRJ, Rio de Janeiro, RJ, Brazil
When we think about science, it is common to associate it both with the generation of new knowledge and the process by which we gain a better understanding of the natural and social world. Given the importance of scientific reporting for peers to support research in the most diverse areas, and for society more broadly, transparency in such reporting is a sine qua non condition for the reliability of research activity worldwide. In Enhancing Scientific Reproducibility in Biomedical Research Through Transparent Reporting,1 the US National Academies of Sciences, Engineering, and Medicine (2020) presents “approaches to cultivate transparent reporting in biomedical research.”
The idea of transparent reporting is strongly associated with research integrity, irrespective of field. In the words of Richard Feynman (1918–1988), “it is a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty – a kind of ‘leaning over backwards’…”2 (Feynman, 1974).
The author associates utter honesty in scientific reporting with the sharing of “all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.”2
The concept of research integrity has evolved over time, closely following the transformations in both the practice of science and the very definition of responsible research conduct (Steneck, 2006; Resnik, 2011; Vasconcelos & Marušić, 2025).
A lack of transparency, or the omission of essential data and information necessary to fully understand a research report, has become a central concern, given its direct effect on the quality of the research record. Missing information can hinder the replication of studies by other researchers, while also making it more difficult to assess the significance, potential applications, and limitations of the findings.
When research reports omit errors or inconsistencies, it becomes harder for others to detect these issues, and in fields such as health research, this lack of openness can further complicate the publication landscape.
Discussing the importance of transparent reporting in health research, Altman & Moher, in Importance of Transparent Reporting of Health Research3 (2014), described, over a decade ago, persistent deficiencies in the adherence of health research literature to principles of rigorous reporting, especially of methods and results. They highlighted that systematic reviews served as a primary source of evidence that these flaws permeated health literature.
The problem included randomized controlled trials, in which crucial information for readers was absent from the reports. Initiatives such as STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) have contributed to clearer communication, for example, of observational studies.
This initiative is justified by the concern that incomplete and inadequate scientific reports make it difficult to assess the strengths and weaknesses of studies, including the generalizability of the findings (Vandenbroucke, et al., 2007). STROBE provides specific recommendations for each section of the article. For the Discussion section, authors should make an objective assessment of the findings and refrain from exaggerated interpretations of the results. Another relevant point is to report potential limitations of the study, as well as any potential research bias.
In the context of qualitative social research, as Moravcsik (2014) highlights in The Transparency Revolution in Qualitative Social Science,4 social scientists in qualitative research have established more rigorous norms of transparency. Transparency is a basic requirement for evaluating the quality of qualitative work, but its role goes far beyond this aspect. Without this element, there is little motivation to develop new skills and gather more robust evidence.
As noted by the author,
[i]n social research, evidence does not speak for itself but is analyzed to infer unobservable characteristics such as preferences, identities, beliefs, rationality, power, strategic intent, and causality. For readers to understand and engage with research, they must be able to assess how the author purports to conceptualize and measure behavior, draw descriptive and causal inferences from those measures, determine that the results are conclusive vis-à-vis alternatives, and specify broader implications.4
Moravcsik (2014)4 cites the APSA (American Political Science Association):
Evidence-based scholarly communities in the social sciences, natural sciences, and humanities can only exist if their members openly share evidence, results, and arguments. Transparency allows these communities to recognize when research has been conducted rigorously…4
According to Harvey Fineberg, then president of the Gordon and Betty Moore Foundation, in Enhancing Scientific Reproducibility in Biomedical Research Through Transparent Reporting: Proceedings of a Workshop1 (National Academies, 2020), in the field of computational sciences, “[b]eing able to reproduce the computational results of another researcher starting with the same data and replicating a previous study to test its results facilitate the self-correcting nature of science, and are often cited as hallmarks of good science” (National Academies, 2019).5 Grant, et al.6 (2022) link this growing attention to reporting, strengthened by increased “transparency, openness, and reproducibility”, to fundamental scientific ideals such as communality, universalism, disinterestedness, and organized skepticism, as proposed by Robert K. Merton (1910–2003) (Merton, 1973).
These institutional imperatives have been increasingly challenged over time, as research activity has become much more complex and intertwined with a research ecosystem that confronts aspects such as disinterestedness and organized skepticism (Hodson, 2011). As the same authors describe, open science practices allow researchers to better verify the work of their peers, which enhances science’s own self-correction and self-regulation.
Transparency and “The Hidden Research Paper”
Regarding the publication system, collaborative networks naturally form among authors, editors, and reviewers, fostering a fundamental interaction for the advancement of science. This interaction involves a diverse group of individuals who have varying experiences, motivations, personalities, and often distinct cultural norms related to their specific research areas or subfields.
While these particularities enrich the process of communicating science, they can also lead to disagreements about results, their interpretations, the relevance claimed by authors, as well as the conclusions of the studies.
But “what happens when scientists disagree?” Over two decades ago, Richard Horton, editor of one of the most prestigious medical journals, The Lancet, posed this question in The Hidden Research Paper.7 His answer was that “[m]ost times, readers of research papers never know.”
For a deeper exploration of these questions, Horton (2002)7 conducted a qualitative study based on a set of ten articles published in The Lancet. Among the problems identified through the research, the study shed light on how opinions expressed in a research article may not necessarily reflect the views of all its contributors.
In addition to criticizing the superficial presentation of results in the analyzed articles, which lacked a clear commitment to previous findings, Horton (2002) also discussed broader systemic issues that affected (and continue to affect) the publication of scientific research. In his evaluation of the set of articles, he noted that “the diversity of contributor opinion was commonly excluded from the published report. I found that discussion sections were haphazardly organized and did not deal systematically with important questions about the study.”7
The idea of the “hidden research paper” is reflected in the observation that “a research paper rarely represents the full range of opinions of those scientists whose work it claims to report…”7 and that his study revealed “evidence of (self)censored criticism; obscured views about the meaning of research findings; incomplete, confused, and sometimes biased assessment of the implications of a study; and frequent failure to indicate directions for future research.”7
In his reflection, Horton (2002) adds that:
the paper is designed to persuade or at least convey to the reader a particular point of view. When one probes beneath the surface of the published report, one will find a hidden research paper that reveals the true diversity of opinion among contributors about the meaning of their research findings.7
A rhetorical question arises: what direction does a publication ultimately take when different viewpoints, which should enrich the reported information, emerge but are not incorporated into the final report (when they could or should be)?
Transparency in reporting: a challenge that deserves increased attention
In Horton’s (2002)7 research, he noted that diverging opinions and other relevant information were generally excluded from publication. The article takes us behind the scenes of the scientific research world, highlighting complex debates that remain backstage and are not reflected in the report. It also raises his question of “who determines what is written, and why?”
In discussing a disagreement that emerged during the reporting of the Italian Multicentre Acute Stroke Trial, Horton (2002)7 noted that peer review revealed that two committee members had diverged from their colleagues in interpreting the clinical results. He observed a form of self-censorship on their part, which he interpreted as being justified by a higher good: the preservation of collaboration.
This raises a set of important questions: Is the primary goal of research merely to publish? To avoid discomfort? To present an artificial consensus? Or is it to bring to light the full range of thoughts, perspectives, and the richness and plurality of interpretations held by the researchers involved?
In practice, it is rather challenging to discern whether published conclusions accurately reflect genuine consensus or whether divergent interpretations were omitted or even censored. However, this ambiguity is not necessarily a failure, nor does it inherently compromise the validity of the findings described in a research paper.
The fact is that determining the best way to navigate disagreements among authors/contributors to enhance the transparency and robustness of research reporting remains a complex and ongoing challenge. This is an evolving process of institutional learning and refinement shaped, in part, by researchers’ ongoing efforts to enhance their communicative practices, both among peers and with broader publics (OECD, 2023).
Scientific reporting and qualitative research: analysis and presentation of results
Horton’s metaphor of “the hidden research paper”7 (2002) invites us to extend our reading into different approaches to scientific research and to reflect on some of the specificities of qualitative research. In Transparency in Qualitative Research,8 Moravcsik (2019) discusses transparency in qualitative research in contrast to quantitative research.
In quantitative research, transparency and reproducibility are often highlighted as essential, reflecting the rigor achieved through methodological clarity and explicit data processing and interpretation, which are elements that enable studies to be replicated by other researchers.
In the qualitative context, however, rigor is not limited to detailed descriptions of procedures. It also involves reflexivity, i.e. a critical self-analysis by researchers of how their own perspectives and methodological decisions influence the research process and reporting (Olmos-Vega, et al., 2022).
In this regard, Moravcsik (2019) clarifies that,
[s]ocial scientists who do statistical and experimental work customarily achieve data transparency primarily by archiving a data set in digital repositories… it might seem that qualitative scholars should do likewise… Yet, while data archiving is often useful, it is dangerous to conclude that it can serve (as in statistical work) as a viable baseline transparency strategy for qualitative scholars. This faulty analogy to quantitative research has helped to engender much misplaced criticism of qualitative transparency. Despite its essential role in a small percentage of cases, four disadvantages have led scholars to conclude that archiving data is at best a partial solution appropriate to a thin subset of qualitative research…8
Thus, data archiving can be an effective tool for transparency only in a limited number of cases where there are no major restrictions on research participants and intellectual property, the underlying body of data (e.g., a discrete set of interviews, field notes, or documents) is clearly defined and logistically manageable, and analytical and process transparency is also provided.
Moravcsik’s article (2019)8 offers a critical analysis of the issue of transparency in qualitative research. He analyzes the specific challenges faced by a heterogeneous community of qualitative researchers in the social sciences and draws attention to some sensitive points.
As he describes in his analysis,
simply publicizing ‘all’ the data the author consulted is often neither feasible nor desirable. Strict limits exist on the evidence scholars may make transparent (and readers can process) in an ethical, legal, and logistically manageable manner. Many researchers undertake an ethical duty to protect vulnerable research subjects who appear in interviews, details of field notes, and documents either by keeping them anonymous or by keeping the source material entirely confidential. Everyone involved in discussions of qualitative transparency agree that such imperatives take precedence over transparency.8
In fact, excessive emphasis on transparency in certain social or behavioral studies can compromise the privacy of research participants or lead to oversimplification of complex research processes. According to Steltenpohl, et al. in Rethinking Transparency and Rigor from a Qualitative Open Science Perspective9 (2023), regarding public sharing of qualitative research data, there must be a trade-off between transparency and privacy. The authors draw attention to the role of “graded sharing access to the data,” using the example of data management and usage in the Qualitative Data Repository (QDR), which offers different access categories and imposes restrictions in some cases. As the authors describe,
[s]etting gradations for how much access researchers have to data could allow researchers to meet open science requirements without unnecessarily compromising participant anonymity, an important balance to strike when working with sensitive data and with organizations that are protective of their data (e.g., governmental agencies, industry partners).”9
In line with this caution, Moravcsik (2019)8 argues that balancing transparency with other ethical and methodological considerations remains one of the challenges of conducting qualitative research. As Kapiszewski and Karcher (2020) state in Transparency in Practice in Qualitative Research10:
“Transparency” is not an all-or-nothing prospect: most work is neither left completely opaque nor made completely transparent but rather falls somewhere in between. Indeed, it is important to remember that “transparency” is a means to an end, not an end in itself.10
Reporting Qualitative Research and the Social Desirability Bias
Brenner and DeLamater (2016), in Lies, Damned Lies, and Survey Self-Reports? Identity as a Cause of Measurement Bias11, underscore the importance of addressing social desirability bias, a tendency for individuals to report attitudes or behaviors that misalign with their actual experiences, with their self-reports shaped by what they perceive to be socially acceptable or expected in their environment. The authors reinforce the idea that when designing survey instruments and methodologies, it is crucial to include strategies that help mitigate this bias.
Social desirability is often attributed to the method used to collect responses, such as interviewer-administered surveys versus anonymous web-based formats. Although this attribution is partially warranted by several studies (Rickwood & Coleman-Rose, 2023), Brenner and DeLamater (2016)11 challenge the assumption that response mode is the primary driver of social desirability. The authors draw on social identity theory to explain how individuals adopt the norms of the social groups to which they belong, including their informal rules, and their influence in shaping the way individuals (respondents in surveys, for example) may self-present in ways that align with perceived group norms (Rathbone, et al., 2023).
These identities influence not only what is over- or underreported but also how respondents understand the questions. That said, researchers should carefully examine whether the design of instruments or interview approaches may inadvertently prime research participants to respond in ways that align with perceived normative behavior. However, many questions remain regarding how to address and reduce this type of bias (Brenner and DeLamater, 2016).11
Broadly speaking, we can consider social desirability to be an intrinsic challenge in qualitative research. This phenomenon frequently appears in interactions between researchers and participants, potentially influencing how information is shared. The subjective nature of qualitative research, which seeks to deeply understand individuals’ experiences and perceptions, makes social desirability a source of bias to be considered in writing and analyzing research reports.
Additionally, it is essential to recognize that the same constructs may be interpreted differently across various social contexts, presenting challenges in the attempt to compare or replicate qualitative results. In qualitative research, where the goal is usually to achieve a nuanced, contextualized understanding, interviews are the most commonly used method for data collection (Bastos, et al., 2021; Barros & Vasconcelos, 2024). Thus, researchers must take extra care to anticipate, minimize, and report the effects of social desirability bias appropriately.
Yet, when conducting research involving human participants, it is crucial to consider potential biases not only from participants but also from the researchers themselves. Both can shape data generation and interpretation and should be treated as key sources of reflexive analysis (Olmos-Vega, et al., 2022).
An Evolving Culture: The Importance of Addressing Scientific Reporting among Graduate Students
The scenario outlined in this brief communication sheds light on ongoing transformations in the evaluation of research quality and reliability. It also suggests that there is still a long way to go regarding transparency and mitigating bias in scientific reporting, particularly given the complexity of the research culture and its reflection in publication practices across various fields.
Greater space for critique and self-critique in scientific collaborations regarding the rigor in the preparation of research reporting should be encouraged throughout all stages of academic training. The previously cited document by the National Academies1 (National Academies of Sciences, Engineering, and Medicine, 2020) supports this argument, emphasizing the need for a more complete research report.
Research and publication cultures do not change abruptly (Casci & Adams, 2019; Canti, et al., 2021), and there is a shared responsibility within academia to promote greater transparency in research reporting. Exercising this shared responsibility is part of that transformation. Adherence to editorial guidelines and policies is only one of the elements that can strengthen the transparency and reliability of scientific reporting.
Aspects of the research culture that discourage the disclosure of errors or research weaknesses, including reluctance to reveal limitations of any nature, are a source of concern in this context and deserve special attention from both early-career and experienced researchers.
In Promoting an Open Research Culture,12 Nosek, et al. (2015) echo studies and initiatives on scientific reproducibility (Amaral & Neves, 2021), arguing that “[t]ransparency, openness, and reproducibility are readily recognized as vital features of science,”12 and that “when asked, most scientists endorse these features as norms and values of their fields.”12 In this regard, Nosek, et al.12 (2015) suggest that this appreciation should be routinely reflected in scientific practices but that “mounting evidence suggests otherwise.”12
Revisiting Horton’s7 concerns, his question “[w]ho determines what is written, and why?” could not be more relevant today, in a time of substantial intellectual production supported by Generative Artificial Intelligence (Gen AI). We believe that newly admitted researchers, particularly graduate students like ourselves (the first three authors), should be encouraged to engage more in discussions about authorship responsibility in scientific reporting, especially in light of the increasing demand for greater transparency.
Scientific Communication and/or Research Methodology courses, for example, offer an opportune space for this kind of discussion. Considering strategies to enhance the transparency of scientific reporting prompts us to think critically about the research itself, even before submitting the work for evaluation in the scientific publication system.
The importance of this exercise grows in an era where Gen AI is increasingly integrated into research activities and can automate or influence report writing and generate conclusions that the authors may not have previously considered. This reality reinforces the need to strengthen human agency and oversight to uphold the research integrity (Vasconcelos & Marušić, 2025) of reports produced in graduate programs.
Ensuring critical review and analysis (Lee, H., et al., 2025) remains a key component in the process of knowledge production and validation, a process now undergoing a profound and challenging reconfiguration of authorial construction in research reports across a wide range of areas.
Notes
1. NATIONAL ACADEMIES OF SCIENCES, ENGINEERING, AND MEDICINE. Enhancing Scientific Reproducibility in Biomedical Research Through Transparent Reporting: Proceedings of a Workshop. Washington, DC: The National Academies Press, 2020. Available from: https://doi.org/10.17226/25627 ↩
2. FEYNMAN, R.P. Cargo cult science. Commencement address, California Institute of Technology. 1974. ↩
3. ALTMAN, D.G. and MOHER, D. Importance of transparent reporting of health research. In: MOHER, D., et al. (ed.) Guidelines for reporting health research: A user’s manual. Hoboken: John Wiley & Sons, 2014. ↩
4. MORAVCSIK, A. The Transparency Revolution in Qualitative Social Science: Implications for Policy Analysis. In: WIDNER, J., WOOLCOCK, M. and ORTEGA NIETO, D. (eds.) The Case for Case Studies: Methods and Applications in International Development. London: SAGE Publications, 2019. https://doi.org/10.1017/9781108688253.009. Available from: https://www.cambridge.org/core/services/aop-cambridge-core/content/view/3BAEF3DBD76BF8308A49560AD519F2C2/9781108427272c8_176-192.pdf/the-transparency-revolution-in-qualitative-social-science.pdf ↩
5. New Report Examines Reproducibility and Replicability in Science, Recommends Ways to Improve Transparency and Rigor in Research [online]. National Academies of Sciences, Engineering, and Medicine. 2019 [viewed 04 September 2025]. Available from: https://www.nationalacademies.org/news/2019/05/new-report-examines-reproducibility-and-replicability-in-science-recommends-ways-to-improve-transparency-and-rigor-in-research ↩
6. GRANT, S., et al. Transparent, Open, and Reproducible Prevention Science. Prev Sci [online]. 2022, vol. 23, pp. 701–722 [viewed 04 September 2025]. https://doi.org/10.1007/s11121-022-01336-w. Available from: https://link.springer.com/article/10.1007/s11121-022-01336-w ↩
7. HORTON, R. The Hidden Research Paper. JAMA [online]. 2002, vol. 287, no. 21, pp. 2775–2778 [viewed 04 September 2025]. http://doi.org/10.1001/jama.287.21.2775. Available from: https://jamanetwork.com/journals/jama/fullarticle/194969 ↩
8. MORAVCSIK, A. Transparency in qualitative research. London: SAGE Publications, 2019. http://dx.doi.org/10.4135/9781526421036. Available from: https://www.princeton.edu/~amoravcs/library/TransparencyinQualitativeResearch.pdf ↩
9. STELTENPOHL, C.N., et al. Rethinking Transparency and Rigor from a Qualitative Open Science Perspective. Journal of Trial and Error [online]. 2023, vol. 4, no. 1 [viewed 04 September 2025]. https://doi.org/10.36850/mr7. Available from: https://journal.trialanderror.org/pub/rethinking-transparency/release/1 ↩
10. KAPISZEWSKI, D. and KARCHER, S. Transparency in Practice in Qualitative Research. PS: Political Science & Politics [online]. 2020, vol. 54, no. 2, pp. 285–291 [viewed 04 September 2025]. https://doi.org/10.1017/S1049096520000955. Available from: https://www.cambridge.org/core/journals/ps-political-science-and-politics/article/transparency-in-practice-in-qualitative-research/8B780E06FBF7F0837F39B2FD33900DD1 ↩
11. BRENNER, P.S. and DELAMATER, J. Lies, damned lies, and survey self-reports? Identity as a cause of measurement bias. Social Psychology Quarterly [online]. 2016, vol. 79, no. 4, pp. 333–354 [viewed 04 September 2025]. https://doi.org/10.1177/0190272516628298. Available from: https://journals.sagepub.com/doi/10.1177/0190272516628298 ↩
12. NOSEK, B.A., et al. Promoting an open research culture. Science [online]. 2015, vol. 348, no. 6242, pp. 1422–1425 [viewed 04 September 2025]. https://doi.org/10.1126/science.aab2374. Available from: https://www.science.org/doi/10.1126/science.aab2374 ↩
References
ALTMAN, D.G. and MOHER, D. Importance of transparent reporting of health research. In: MOHER, D., et al. (ed.) Guidelines for reporting health research: A user’s manual. Hoboken: John Wiley & Sons, 2014.
AMARAL, O.B. and NEVES, K. Reproducibility: expect less of the scientific paper. Nature [online]. 2021, vol. 597, no. 7876, pp. 329-331 [viewed 04 September 2025]. https://doi.org/10.1038/d41586-021-02486-7. Available from: https://www.nature.com/articles/d41586-021-02486-7
BARROS, V.S. and VASCONCELOS, S.M.R. Uma análise sobre como se configuram rigor metodológico e confiabilidade na pesquisa qualitativa em teses e dissertações em áreas biomédicas e não-biomédicas no período 2008–2018. Práxis Educativa [online]. 2024, vol. 19, pp. 1–25 [viewed 04 September 2025]. https://doi.org/10.5212/PraxEduc.v.19.22593.014. Available from: https://revistas.uepg.br/index.php/praxiseducativa/article/view/22593
BASTOS, R.A., et al. The structure of qualitative studies: a bibliometric pattern of biomedical literature. Ciência & Saúde Coletiva [online]. 2021, vol. 26, no. 8, pp. 3199–3208 [viewed 04 September 2025]. https://doi.org/10.1590/1413-81232021268.12922020. Available from: https://www.scielo.br/j/csc/a/LY7mKPxfTWvFb3tnVZSmV6m
BRENNER, P.S. and DELAMATER, J. Lies, damned lies, and survey self-reports? Identity as a cause of measurement bias. Social Psychology Quarterly [online]. 2016, vol. 79, no. 4, pp. 333–354 [viewed 04 September 2025]. https://doi.org/10.1177/0190272516628298. Available from: https://journals.sagepub.com/doi/10.1177/0190272516628298
CANTI, L., et al. Research culture: Science from bench to society. Biology Open [online]. 2021, vol. 10, no. 8, bio058919 [viewed 04 September 2025]. https://doi.org/10.1242/bio.058919. Available from: https://journals.biologists.com/bio/article/10/8/bio058919/271797/Research-culture-science-from-bench-to-society
CASCI, T. and ADAMS, E. Re-imagining research culture. F1000Research [online]. 2019, vol. 8, 1788 [viewed 04 September 2025]. https://doi.org/10.7490/f1000research.1117526.1. Available from: https://f1000research.com/documents/8-1697
FEYNMAN, R.P. Cargo cult science. Commencement address, California Institute of Technology. 1974.
GRANT, S., et al. Transparent, Open, and Reproducible Prevention Science. Prev Sci [online]. 2022, vol. 23, pp. 701–722 [viewed 04 September 2025]. https://doi.org/10.1007/s11121-022-01336-w. Available from: https://link.springer.com/article/10.1007/s11121-022-01336-w
HODSON, D. Turning the spotlight on science. In: HODSON, D. Looking to the future. Rotterdam: SensePublishers, 2011. Available from: https://doi.org/10.1007/978-94-6091-472-0_4.
HORTON, R. The Hidden Research Paper. JAMA [online]. 2002, vol. 287, no. 21, pp. 2775–2778 [viewed 04 September 2025]. http://doi.org/10.1001/jama.287.21.2775. Available from: https://jamanetwork.com/journals/jama/fullarticle/194969
KAPISZEWSKI, D. and KARCHER, S. Transparency in Practice in Qualitative Research. PS: Political Science & Politics [online]. 2020, vol. 54, no. 2, pp. 285–291 [viewed 04 September 2025]. https://doi.org/10.1017/S1049096520000955. Available from: https://www.cambridge.org/core/journals/ps-political-science-and-politics/article/transparency-in-practice-in-qualitative-research/8B780E06FBF7F0837F39B2FD33900DD1
LEE, H.P., et al. The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers [online]. Microsoft. 2025 [viewed 04 September 2025]. Available from: https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf
MORAVCSIK, A. The Transparency Revolution in Qualitative Social Science: Implications for Policy Analysis. In: WIDNER, J., WOOLCOCK, M. and ORTEGA NIETO, D. (eds.) The Case for Case Studies: Methods and Applications in International Development. London: SAGE Publications, 2019. https://doi.org/10.1017/9781108688253.009. Available from: https://www.cambridge.org/core/services/aop-cambridge-core/content/view/3BAEF3DBD76BF8308A49560AD519F2C2/9781108427272c8_176-192.pdf/the-transparency-revolution-in-qualitative-social-science.pdf
MORAVCSIK, A. Transparency in qualitative research. London: SAGE Publications, 2019. http://dx.doi.org/10.4135/9781526421036. Available from: https://www.princeton.edu/~amoravcs/library/TransparencyinQualitativeResearch.pdf
NATIONAL ACADEMIES OF SCIENCES, ENGINEERING, AND MEDICINE. Enhancing Scientific Reproducibility in Biomedical Research Through Transparent Reporting: Proceedings of a Workshop. Washington, DC: The National Academies Press, 2020. Available from: https://doi.org/10.17226/25627
New Report Examines Reproducibility and Replicability in Science, Recommends Ways to Improve Transparency and Rigor in Research [online]. National Academies of Sciences, Engineering, and Medicine. 2019 [viewed 04 September 2025]. Available from: https://www.nationalacademies.org/news/2019/05/new-report-examines-reproducibility-and-replicability-in-science-recommends-ways-to-improve-transparency-and-rigor-in-research
NOSEK, B.A., et al. Promoting an open research culture. Science [online]. 2015, vol. 348, no. 6242, pp. 1422–1425 [viewed 04 September 2025]. https://doi.org/10.1126/science.aab2374. Available from: https://www.science.org/doi/10.1126/science.aab2374
OECD. Communicating science responsibly [online]. OECD.org. 2023 [viewed 04 September 2025]. Available from: https://www.oecd.org/content/dam/oecd/en/publications/reports/2023/10/communicating-science-responsibly_7c1d5a8a/5c3be7ce-en.pdf
OLMOS-VEGA, F. M., et al. A practical guide to reflexivity in qualitative research: AMEE Guide No. 149. Medical Teacher [online]. 2022, vol. 45, no. 3, pp. 241–251 [viewed 04 September 2025]. https://doi.org/10.1080/0142159X.2022.2057287. Available from: https://www.tandfonline.com/doi/full/10.1080/0142159X.2022.2057287
RATHBONE, J.A., et al. The reciprocal relationship between social identity and adherence to group norms. British Journal of Social Psychology [online]. 2023, vol. 62, no. 3, pp. 1346–1362 [viewed 04 September 2025]. https://doi.org/10.1111/bjso.12635. Available from: https://bpspsychub.onlinelibrary.wiley.com/doi/10.1111/bjso.12635
Reproducibility and Replicability in Science [online]. National Academies of Sciences, Engineering, and Medicine. 2019 [viewed 04 September 2025]. Available from: https://www.nationalacademies.org/our-work/reproducibility-and-replicability-in-science
RESNIK, B.D. Scientific research and the public trust. Science and Engineering Ethics [online]. 2011, vol. 17, no. 3, pp. 399–409 [viewed 04 September 2025]. https://doi.org/10.1007/s11948-010-9210-x. Available from: https://link.springer.com/article/10.1007/s11948-010-9210-x
RICKWOOD, D.J. and COLEMAN-ROSE, C.L. The effect of survey administration mode on youth mental health measures: Social desirability bias and sensitive questions. Heliyon [online]. 2023, vol. 9, no. 9, e20131 [viewed 04 September 2025]. https://doi.org/10.1016/j.heliyon.2023.e20131. Available from: https://www.cell.com/heliyon/fulltext/S2405-8440(23)07339-5
STELTENPOHL, C.N., et al. Rethinking Transparency and Rigor from a Qualitative Open Science Perspective. Journal of Trial and Error [online]. 2023, vol. 4, no. 1 [viewed 04 September 2025]. https://doi.org/10.36850/mr7. Available from: https://journal.trialanderror.org/pub/rethinking-transparency/release/1
STENECK, N.H. Fostering integrity in research: Definitions, current knowledge, and future directions. SCI ENG ETHICS [online]. 2006, vol. 12, pp. 53–74 [viewed 04 September 2025]. https://doi.org/10.1007/PL00022268. Available from: https://link.springer.com/article/10.1007/PL00022268
VANDENBROUCKE, J.P., et al. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): Explanation and Elaboration. PLOS Medicine [online]. 2007, vol. 4, no. 10, e297 [viewed 04 September 2025]. https://doi.org/10.1371/journal.pmed.0040297. Available from: https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0040297
VASCONCELOS, S. and MARUŠIĆ, A. Gen AI and Research Integrity: Where to now?: The Integration of Generative AI in the Research Process Challenges Well-Established Definitions of Research Integrity. EMBO Reports [online]. 2025, vol. 26, pp. 1923–1928 [viewed 04 September 2025]. https://doi.org/10.1038/s44319-025-00424-6. Available from: https://www.embopress.org/doi/full/10.1038/s44319-025-00424-6