By Rafael Cardoso Sampaio

Image: Declan Sun via Unsplash.
The consolidation of generative artificial intelligence (AI), with tools such as ChatGPT, Gemini, Claude, DeepSeek, and the like, is significantly transforming the production of academic knowledge. Tools that assist in writing, data analysis, and idea generation have become ubiquitous, promising to increase efficiency and democratize access to scientific output. However, this technological revolution poses a central ethical dilemma for researchers, forcing them to choose between following the consensus recommendation to declare the use of AI, at the cost of their credibility, or omitting it and compromising scientific integrity. This paradox, in which transparency clashes with social costs, reveals tensions between ethics, perception, and innovation in academia.
Internationally, there is virtually a consensus on the need for transparency. Entities such as the International Committee of Medical Journal Editors (ICMJE) and the Committee on Publication Ethics (COPE), as well as publishers such as Cambridge, Elsevier, Oxford, and Taylor & Francis, require authors to explicitly declare the use of generative AI. Sampaio, Sabbatini, and Limongi (2024, p. 20), in the Diretrizes para o uso ético e responsável da Inteligência Artificial Generativa,1 recommend that the declaration be made at the end of the paper, under the heading “Declaration of AI and AI-assisted technologies in the writing process,” following the model below.
During the preparation of this work, the author(s) used [name of tool, model, or service] version [number and/or date] to [justify the reason]. After using this tool/model/service, the author(s) reviewed and edited the content in accordance with the scientific method and assume full responsibility for the content of the publication.1
The reasons for this requirement are clear. As Hosseini, et al. (2023) point out, AI tools have no moral agency and no capacity to be held legally or morally responsible for their outputs; responsibility falls entirely on the human authors. Additionally, transparency is essential for reproducibility, a pillar of science, since readers and reviewers need to know all the methodological tools used in order to critically evaluate the results (Meinel, et al., 2025). Finally, the declaration prevents the attribution of undue credit, ensuring that human intellectual effort is properly recognized (Sampaio, Sabbatini, and Limongi, 2024).
Despite this strong ethical consensus, adherence to the declaration is surprisingly low, exposing an uncomfortable paradox: being forthcoming comes at a high cost. The study The transparency dilemma: How AI disclosure erodes trust,2 by Schilke and Reimann (2025), showed that people who declare their use of AI are systematically perceived as less trustworthy. Across a robust series of thirteen experiments, the authors found that disclosure of AI use consistently erodes trust, regardless of the task or the audience.
As they state, “disclosure of AI use serves as a warning to recipients that the discloser’s work is not purely human-generated, which is likely to be viewed as illegitimate and, consequently, diminishes trust.”2 (Schilke and Reimann, 2025, p. 2, emphasis added).
The mechanism behind this effect is precisely the loss of legitimacy. Academia, in particular, operates on institutional expectations about what constitutes “legitimate intellectual work,” often associated with effort, originality, and accumulated human knowledge. The AI disclosure statement breaks this expectation, raising doubts about the researcher’s competence and agency.
The study by Schilke and Reimann (2025)2 found that this negative effect persists even when the statement is mitigated with phrases such as “reviewed by a human” or “used only for grammatical correction.” The mere mention of AI is enough to trigger a negative bias. This is the heart of the paradox: the researcher who acts with integrity is penalized, while the one who omits the information is rewarded, by comparison, with a perception of greater credibility.
In Brazil, the discussion about the use and transparency of generative artificial intelligence in the scientific field is clearly in its infancy. The study Inteligência artificial e a escrita auxiliada por algoritmos: o que sinalizam as associações científicas e seus periódicos?3 conducted by Lopes, et al. (2024) analyzed 33 scientific associations from various fields and 50 of their journals, finding that none of the associations had an explicit position on the topic and that only three of the 50 journals (6%) had, as of June 2023, formal guidelines on the use of AI in scientific writing. When the topic is addressed, concerns focus on authorship, plagiarism, and ethics, but most national publications still operate in a regulatory vacuum.
Recently, an overview of national scientific publishing, Um panorama das diretrizes relacionadas ao uso de inteligência artificial nos principais periódicos da Área Interdisciplinar da CAPES,4 conducted by Gomes and Mendes (2025), revealed that only 20.5% of the main journals in the CAPES Interdisciplinary Area mention the use of AI in their guidelines to authors. Sampaio, Sabbatini, and Limongi (2024) showed that, at the end of 2024, there were still no clear guidelines from the main institutions linked to science, such as the Ministério da Ciência, Tecnologia e Inovação (MCTI), the Ministério da Educação (MEC), the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Capes), the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), and even state funding agencies such as the Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) and the Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (Faperj).
The same scenario recurs in public universities themselves: an exploratory study highlighted by The Conversation Brazil5 found only seven institutions with any rules or guidelines on the use of AI among more than 150 institutions surveyed.
The lack of structured debate and clear guidelines means that the use of AI remains associated with stigmas of “stupidity,” “laziness,” or “cheating” in informal academic discourse. This negative perception creates an environment of fear, in which students and researchers fear the judgment of their peers, editors, and funding agencies, which inhibits disclosure and fosters a culture of secrecy. The result is a vicious cycle in which stigma prevents disclosure and the absence of public statements by reputable researchers reinforces the stigma. Consequently, AI is often employed clandestinely, without the necessary scrutiny to ensure its ethical and responsible use.
Beyond cultural barriers, the implementation of transparency faces significant practical challenges. One of them is the very definition of “substantial use” of the technology, which remains a matter of wide debate, and there is still considerable uncertainty about what exactly needs to be disclosed. On this point, we adopt the suggestions of Resnik and Hosseini in Disclosing artificial intelligence use in scientific research and publication: When should disclosure be mandatory, optional, or unnecessary?6 (2025, p. 6), who propose three criteria for identifying substantial uses of AI tools in research and offer practical examples (see the examples at the end of the text):
(a) The AI tool makes decisions that directly affect the research results. For example, using AI to extract data from articles to perform a systematic review would be substantial and intentional, as data extraction decisions affect the results of the review.
(b) The AI tool generates or synthesizes content, data, or images. For example, using AI to write sections of an article, integrate notes or other information, translate the language of the article, or create synthetic images or data would be substantial and intentional, as the AI generated or synthesized new content that directly affects the research results.
(c) The AI tool analyzes content, data, or images. For example, using AI to analyze genomic data, text, or radiological images would be substantial and intentional, as it produces analyses that support findings and conclusions and affect the content of a publication.6
However, overcoming this paradox requires coordinated action and cultural change. The solution is not to abandon transparency, but to build an environment in which declaring the ethical use of AI is seen as a sign of professionalism and responsibility rather than an act of courage.
An open debate that dissociates the use of AI from fraud and positions AI as a legitimate tool to support research should be promoted, with universities and graduate programs including training on its ethical and effective use in their curricula. The harmonization of editorial policies is also a key step, as is the development of clearer and more objective declaration criteria.
Finally, cultural change depends on the actions of academic leaders, such as senior researchers, productivity fellows, editors, and directors of funding agencies, who, by openly declaring their responsible use of AI, can help normalize the practice and dismantle the stigma. In this sense, Schilke and Reimann (2025)2 suggest that academia should explicitly recognize the cost of transparency and actively work to reduce it, mainly through public education about the benefits of responsible AI use, normalizing it as a support tool.
As Meinel, et al. state in Transparency in Research Reporting: An Evolving Culture7 (2025), transparency is a pillar of scientific credibility. Without it, academia risks replicating biases, hiding flaws, and, ultimately, losing public trust. While institutions develop clearer and more harmonized policies, it is up to each researcher to reflect on their own use of AI and on how to communicate it responsibly, as part of a genuine commitment to academic integrity (Tang, et al., 2024). Transparency, when supported by understanding and education, can ultimately become not a cost to be avoided but a value to be cultivated.
It is important to note that the struggle for transparency in the use of AI is not an isolated phenomenon, but rather the latest chapter in an ongoing effort to improve the integrity of scientific reporting. Hesitation to declare the use of AI can be seen as the creation of a “hidden report” that excludes a key non-human collaborator, motivated by the same fear that revealing a less linear and less purely human process could diminish the perceived strength of the research. The current debate on AI is therefore part of this long tradition of improving reporting practices to make science more open, replicable, and reliable (Meinel, et al., 2025).
Below is a simple scheme, presented by Resnik and Hosseini6 (2025, p. 7), of the main situations in which the use of AI should or need not be declared:
Disclosure is mandatory when, for example, AI is used
● To formulate questions or hypotheses, design and conduct experiments.
● To write parts of the article, summarize, paraphrase, significantly revise, or synthesize textual content.
● To translate parts or all of the article.
● To collect, analyze, interpret, or visualize data (quantitative or qualitative).
● To extract data for literature review (systematic or otherwise) and identify knowledge gaps.
● To generate synthetic data and images reported in the article or used in the research.

Disclosure is optional when, for example, AI is used
● To edit existing text for grammar, spelling, or organization.
● To find references or verify the relevance of references found by humans.
● To find and generate examples for existing content.
● To brainstorm and suggest ways to organize an article or the title of an article/section.
● To validate and/or provide feedback on existing ideas, text, and code.

Disclosure is unnecessary when, for example, AI is used
● To suggest words or phrases that improve the clarity/readability of an existing sentence.
● As part of a larger operation in which AI is not generating or synthesizing content or making research decisions; for example, when AI is integrated with other systems/machines.
● As a digital assistant, for example, to help organize and maintain digital assets and project workflows.
Table 1. Disclosure of the use of AI in research and writing.
Declaration of AI and AI-assisted technologies in the writing process
“While preparing this text, the author used Google’s Gemini 2.5 Pro and Z.ai’s GLM 4.5 in September 2025 for brainstorming, text evaluation, and grammatical and semantic improvements to the text. After using these tools, the author reviewed and edited the content in accordance with the scientific method and assumes full responsibility for the content of the publication”.
Notes
1. SAMPAIO, R.C., SABBATINI, M. and LIMONGI, R. Diretrizes para o uso ético e responsável da Inteligência Artificial Generativa: um guia prático para pesquisadores. São Paulo: Editora Intercom, 2024. Available from: https://www.portcom.intercom.org.br/ebooks/detalheEbook.php?id=57203
2. SCHILKE, O. and REIMANN, M. The transparency dilemma: How AI disclosure erodes trust. Organizational Behavior and Human Decision Processes [online]. 2025, vol. 188, 104405 [viewed 10 October 2025]. https://doi.org/10.1016/j.obhdp.2025.104405. Available from: https://www.sciencedirect.com/science/article/pii/S0749597825000172?via%3Dihub
3. LOPES, C., et al. Artificial Intelligence and Writing Assisted by Algorithms: What Do Scientific Associations and Their Journals Signal? Cadernos De Educação Tecnologia e Sociedade [online]. 2024, vol. 17, no. 2, pp. 623–648 [viewed 10 October 2025]. https://doi.org/10.14571/brajets.v17.n2.623-648. Available from: https://brajets.com/brajets/article/view/1227
4. GOMES, R.A. and MENDES, T.A. Um panorama das diretrizes relacionadas ao uso de inteligência artificial nos principais periódicos da Área Interdisciplinar da CAPES. Encontros Bibli [online]. 2025, vol. 30, e103488 [viewed 10 October 2025]. https://doi.org/10.5007/1518-2924.2025.e103488. Available from: https://www.scielo.br/j/eb/a/SJk53dtyBBfVq583TJhtGTz
5. SAMPAIO, R.C. Pesquisa feita nas principais universidades mostra que uso da IA segue desregulado no ensino superior brasileiro [online]. The Conversation. 2025 [viewed 10 October 2025]. Available from: https://theconversation.com/pesquisa-feita-nas-principais-universidades-mostra-que-uso-da-ia-segue-desregulado-no-ensino-superior-brasileiro-262838
6. RESNIK, D.B. and HOSSEINI, M. Disclosing artificial intelligence use in scientific research and publication: When should disclosure be mandatory, optional, or unnecessary? Accountability in Research [online]. 2025, pp. 1–13 [viewed 10 October 2025]. https://doi.org/10.1080/08989621.2025.2481949. Available from: https://www.tandfonline.com/doi/full/10.1080/08989621.2025.2481949
7. MEINEL, R.D., et al. Transparency in Research Reporting: An Evolving Culture [online]. SciELO in Perspective, 2025 [viewed 09 October 2025]. Available from: https://blog.scielo.org/en/2025/09/04/transparency-in-research-reporting-an-evolving-culture/
References
CHO, W.I., CHO, E. and SHIN, H. Three Disclaimers for Safe Disclosure: A Cardwriter for Reporting the Use of Generative AI in Writing Process. arXiv [online]. 2024 [viewed 10 October 2025]. https://doi.org/10.48550/arXiv.2404.09041. Available from: https://arxiv.org/abs/2404.09041
GOMES, R.A. and MENDES, T.A. Um panorama das diretrizes relacionadas ao uso de inteligência artificial nos principais periódicos da Área Interdisciplinar da CAPES. Encontros Bibli [online]. 2025, vol. 30, e103488 [viewed 10 October 2025]. https://doi.org/10.5007/1518-2924.2025.e103488. Available from: https://www.scielo.br/j/eb/a/SJk53dtyBBfVq583TJhtGTz
HOSSEINI, M., RESNIK, D. B. and HOLMES, K. The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Research Ethics [online]. 2023, vol. 19, no. 4, pp. 449–465 [viewed 10 October 2025]. https://doi.org/10.1177/17470161231180449
LOPES, C., et al. Artificial Intelligence and Writing Assisted by Algorithms: What Do Scientific Associations and Their Journals Signal? Cadernos De Educação Tecnologia e Sociedade [online]. 2024, vol. 17, no. 2, pp. 623–648 [viewed 10 October 2025]. https://doi.org/10.14571/brajets.v17.n2.623-648. Available from: https://brajets.com/brajets/article/view/1227
MEINEL, R.D., et al. Transparency in Research Reporting: An Evolving Culture [online]. SciELO in Perspective, 2025 [viewed 09 October 2025]. Available from: https://blog.scielo.org/en/2025/09/04/transparency-in-research-reporting-an-evolving-culture/
RESNIK, D.B. and HOSSEINI, M. Disclosing artificial intelligence use in scientific research and publication: When should disclosure be mandatory, optional, or unnecessary? Accountability in Research [online]. 2025, pp. 1–13 [viewed 10 October 2025]. https://doi.org/10.1080/08989621.2025.2481949. Available from: https://www.tandfonline.com/doi/full/10.1080/08989621.2025.2481949
SAMPAIO, R.C. Pesquisa feita nas principais universidades mostra que uso da IA segue desregulado no ensino superior brasileiro [online]. The Conversation. 2025 [viewed 10 October 2025]. Available from: https://theconversation.com/pesquisa-feita-nas-principais-universidades-mostra-que-uso-da-ia-segue-desregulado-no-ensino-superior-brasileiro-262838
SAMPAIO, R.C., SABBATINI, M. and LIMONGI, R. Diretrizes para o uso ético e responsável da Inteligência Artificial Generativa: um guia prático para pesquisadores. São Paulo: Editora Intercom, 2024. Available from: https://www.portcom.intercom.org.br/ebooks/detalheEbook.php?id=57203
SCHILKE, O. and REIMANN, M. The transparency dilemma: How AI disclosure erodes trust. Organizational Behavior and Human Decision Processes [online]. 2025, vol. 188, 104405 [viewed 10 October 2025]. https://doi.org/10.1016/j.obhdp.2025.104405. Available from: https://www.sciencedirect.com/science/article/pii/S0749597825000172?via%3Dihub
TANG, A. et al. The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing. Journal of Nursing Scholarship [online]. 2024, vol. 56, no. 2, pp. 314–318 [viewed 10 October 2025]. https://doi.org/10.1111/jnu.12938. Available from: https://sigmapubs.onlinelibrary.wiley.com/doi/10.1111/jnu.12938
About Rafael Cardoso Sampaio
Rafael Cardoso Sampaio is a permanent professor in the Political Science Graduate Program at the Universidade Federal do Paraná. He is a CNPq productivity fellow. He is one of the authors, along with Marcelo Sabbatini and Ricardo Limongi, of the book “Diretrizes para o uso ético e responsável da inteligência artificial generativa: um guia prático para pesquisadores” published by Intercom, and is the author, along with Dalson Figueiredo, of the introductory guide “Prompts (Infalíveis!) para Pesquisa Acadêmica com Inteligência Artificial” published by Edufpi.
Translated from the original in Portuguese by Lilian Nassi-Calò.