By Ricardo Limongi França Coelho and Luis Carlos Coelho
Introduction

Image: Liah Martens on Unsplash.
The emergence of generative artificial intelligence in scientific output has sparked intense debates about integrity, transparency, and, above all, authorship. Recent studies indicate that the use of language models in scientific articles has grown exponentially: analyses of large corpora have identified evidence of AI modification in up to 22.5% of computer science abstracts. High-impact scientific journals, publishers, and publication ethics organizations have taken a relatively consensual position: AI systems cannot be listed as authors. However, beyond normative guidelines, a fundamental philosophical question remains: what is the nature of the relationship between researchers and AI in the knowledge production process?
This post proposes that the Socratic maieutic perspective, the method of “giving birth to ideas”, offers a fruitful conceptual framework for rethinking this relationship. More than a tool for extracting information, AI can be understood as a dialogical interlocutor that helps researchers articulate knowledge that is already gestating within them.
Maieutics as a metaphor for knowledge production
The term “maieutics” derives from the Greek maieutikós, which means “the art of midwifery”. Socrates used this metaphor deliberately: just as his mother, Phaenarete, was a midwife and helped women give birth to children, he saw himself as a midwife of ideas, helping his interlocutors give birth to knowledge they already carried within themselves but had not yet articulated or become conscious of.
The method works essentially through successive questions. Instead of transmitting knowledge directly, Socrates questioned his interlocutors in order to make them examine their own beliefs, identify contradictions in their reasoning and gradually arrive at more refined understandings. The underlying assumption is that genuine knowledge cannot simply be deposited in someone from outside but must emerge from an active process of reflection and personal discovery.
This perspective has profound implications for the discussion of AI and scientific authorship. If genuine knowledge emerges from a dialogical process of questioning and reflection, then the central question is not whether AI participated in the text’s production, but how it participated and, fundamentally, who remains responsible for the knowledge produced.
From extraction to reflection: repositioning the researcher-AI relationship
The predominant use of generative AI systems in academia has followed an instrumental logic: AI is treated as an oracle that provides ready-made answers (texts, abstracts, analyses, code). This approach, which we could call “extractivist”, positions the researcher as a passive consumer of machine-generated outputs. As Birhane et al. (2023) warn in the article Science in the age of large language models1, published in Nature Reviews Physics, large language models introduce significant risks to science, including the possible erosion of critical thinking and originality.
The maieutic perspective suggests a radical repositioning of this relationship. Instead of extracting answers, researchers can use AI as a dialogical partner that helps them clarify, examine, and refine ideas in development. The iterative process of prompting thus becomes a contemporary form of Socratic dialogue: each AI response is not an end point, but a stimulus for further reflection.
From this perspective, AI takes on the role of a “midwife”: it asks questions, offers alternative perspectives, identifies gaps in reasoning, and challenges assumptions. But it is still the researcher who “delivers” the knowledge. Epistemic responsibility, the ability and obligation to account for the knowledge produced, remains non-transferable. As Lloyd (2025) argues in Epistemic responsibility: toward a community standard for human-AI collaborations2, it is necessary to develop community standards for human-AI collaborations that preserve this epistemic responsibility.
Authorship and epistemic responsibility: the emerging consensus
The leading organizations on ethics in scientific publishing converge on one fundamental point: AI systems cannot be authors because they cannot take responsibility. The COPE Council (2023)3 has established in its official position that AI tools do not meet the requirements for authorship because they “cannot be held accountable for the work” nor “manage issues related to accuracy, integrity, and originality.” Similarly, Hosseini, Rasmussen, and Resnik (2023)4 argue in Using AI to write scholarly publications that natural language processing systems cannot be considered authors because they lack moral agency.
The journal Science was categorical in stating, through its editor-in-chief, that “AI-generated text is (…) analogous to plagiarism” (Thorp, 2023, p. 313)5. Nature established its own guidelines, emphasizing that “large-scale language models (LLMs), such as ChatGPT, do not meet our criteria for authorship” (Nature Editorial, 2023, p. 612)6. These positions reflect a deep understanding of the nature of scientific authorship: being an author is not just about producing text, but about taking epistemic responsibility for the knowledge communicated.
A bibliometric analysis conducted by Ganjavi et al. (2024), published in The BMJ as Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis7, examined the guidelines of the 100 leading academic publishers and the 100 highest-ranked scientific journals. The results revealed that only 24% of publishers had specific guidelines on generative AI, but among those that did, 96-98% prohibited the attribution of authorship to AI systems. This highlights an emerging consensus that links authorship to responsibility, something that machines, by definition, cannot assume.
Implications for editorial policies: from prohibition to qualified transparency
The maieutic perspective offers a more nuanced framework for editorial policies than the simple permission/prohibition dichotomy. What matters is not only whether AI was used, but also the nature of that use and, fundamentally, whether the researcher maintained their position as a responsible epistemic agent throughout the process.
An analysis of the guidelines of major publishers reveals important nuances. Elsevier distinguishes between acceptable uses (language refinement, analysis support) and prohibited uses (generation of scientific images, production of text without critical supervision). Springer Nature emphasizes that any use must be documented in the methods section. As documented by Resnik and Hosseini (2024) in The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool8, there is an urgent need for new ethical guidelines for the use of AI in scientific research, including recommendations ranging from a clear definition of the role of AI to mechanisms for verifying generated content.
In the Brazilian context, Sampaio, Sabbatini, and Limongi (2025), in Brazilian researchers launch Guidelines for the ethical and responsible use of Generative Artificial Intelligence (GenAI)9, proposed specific guidelines for the ethical and responsible use of generative AI in academic research, emphasizing principles of transparency, responsibility, and integrity. As highlighted by Vasconcelos and Marušić (2025) in the post Research Integrity and Human Agency in Research Intertwined with Generative AI10, published on the SciELO in Perspective blog, the central issue is to preserve human agency in research intertwined with generative AI. These distinctions point to a common principle: transparency should not be an end in itself, but a means of assessing whether epistemic responsibility has been adequately exercised.
Training researchers: teaching them to engage in dialogue, not to extract
The maieutic perspective has direct implications for training researchers in AI-related skills, what the literature has termed AI literacy. In a seminal work, What is AI Literacy? Competencies and Design Considerations11, Long and Magerko (2020) defined AI literacy as “a set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool” (p. 2). The authors identified 17 essential competencies, organized into dimensions ranging from technical understanding to ethical evaluation.
More than teaching how to use AI tools efficiently, the goal is to develop the ability to engage critically with these systems. In PLoS Computational Biology, Smith et al. (2024)12 proposed ten simple rules for using large language models in science, which include principles such as maintaining healthy skepticism, verifying outputs, documenting usage, and understanding the limitations of the models.
This involves specific skills: formulating questions that stimulate reflection rather than just requesting answers; critically evaluating the outputs generated; identifying biases, gaps, and inconsistencies; and, fundamentally, maintaining awareness of one’s own position as a responsible epistemic agent. These skills echo the Socratic tradition: the proper use of AI, like good philosophical dialogue, requires a willingness to question, including oneself.
Graduate programs and training in scientific methodology need to incorporate these dimensions. It is not a question of prohibiting the use of AI or encouraging it uncritically, but of training researchers to use these technologies in a way that expands, rather than replaces, their intellectual agency.
Concluding remarks: AI as a midwife, not an oracle
The maieutic metaphor offers a conceptual framework that transcends the simplistic dichotomies of the current debate on AI in science. Neither a neutral tool to be used without restrictions nor a threat to be banned, generative AI can be understood as a dialogical interlocutor: a contemporary “midwife” that helps researchers give birth to knowledge they already carry within them.
This perspective does not resolve all tensions: questions remain about algorithmic bias, unequal access, environmental impacts, and the risks of homogenizing scientific thought. It does, however, reposition the discussion about authorship on a more solid footing: the central issue is not the participation of AI, but the maintenance of the researcher’s epistemic responsibility.
For the scholarly communication community (editors, reviewers, and journal managers), the challenge is to develop mechanisms that evaluate not only transparency in the use of AI, but also the quality of the dialogical process that this use represents. For researchers, the challenge is to learn to use AI as Socrates used dialogue: not to obtain ready-made answers, but to refine their own thinking.
After all, in maieutics, the midwife is not the protagonist of birth. That role belongs to the person giving birth. In AI-mediated scientific output, genuine knowledge continues to be born from the researcher. AI can assist in the birth, but the responsibility for what is born remains, inescapably, human.
Notes
1.BIRHANE, A., et al. Science in the age of large language models. Nature Reviews Physics [online]. 2023, vol. 5, pp. 277-280. [viewed 21 January 2026]. https://doi.org/10.1038/s42254-023-00581-4. Available from: https://www.nature.com/articles/s42254-023-00581-4 ↩
2.LLOYD, D. Epistemic responsibility: toward a community standard for human-AI collaborations. Frontiers in Artificial Intelligence [online]. 2025, vol. 8, 1635691. [viewed 21 January 2026]. https://doi.org/10.3389/frai.2025.1635691. Available from: https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1635691/full ↩
3.COPE COUNCIL. COPE Authorship and AI tools [online]. Committee on Publication Ethics, 2023. [viewed 21 January 2026]. https://doi.org/10.24318/cCVRZBms. Available from: https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools ↩
4.HOSSEINI, M., RASMUSSEN, L. M., and RESNIK, D. B. Using AI to write scholarly publications. Accountability in Research [online]. 2023, vol. 31, no. 7, pp. 715-723. [viewed 21 January 2026]. https://doi.org/10.1080/08989621.2023.2168535. Available from: https://www.tandfonline.com/doi/full/10.1080/08989621.2023.2168535 ↩
5.THORP, H. H. ChatGPT is fun, but not an author. Science [online]. 2023, vol. 379, no. 6630, p. 313. [viewed 21 January 2026]. https://doi.org/10.1126/science.adg7879. Available from: https://www.science.org/doi/10.1126/science.adg7879 ↩
6.Nature Editorial. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature [online]. 2023, vol. 613, p. 612. [viewed 21 January 2026]. https://doi.org/10.1038/d41586-023-00191-1. Available from: https://www.nature.com/articles/d41586-023-00191-1 ↩
7.GANJAVI, C., et al. Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis. The BMJ [online]. 2024, vol. 384, e077192. [viewed 21 January 2026]. https://doi.org/10.1136/bmj-2023-077192. Available from: https://www.bmj.com/content/384/bmj-2023-077192 ↩
8.RESNIK, D. B., and HOSSEINI, M. The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool. AI and Ethics [online]. 2024, vol. 5, no. 2, pp. 1499-1521. [viewed 21 January 2026]. https://doi.org/10.1007/s43681-024-00493-8. Available from: https://link.springer.com/article/10.1007/s43681-024-00493-8 ↩
9.SAMPAIO, R. C., SABBATINI, M., and LIMONGI, R. Pesquisadores brasileiros lançam diretrizes para o uso ético e responsável da Inteligência Artificial Generativa (IAG) [online]. SciELO em Perspectiva, 2025. [viewed 21 January 2026]. Available from: https://blog.scielo.org/blog/2025/02/05/pesquisadores-lancam-diretrizes-iag/ ↩
10.VASCONCELOS, S., and MARUŠIĆ, A. Integridade científica e agência humana na pesquisa entremeada por IA Generativa [online]. SciELO em Perspectiva, 2025. [viewed 21 January 2026]. Available from: https://blog.scielo.org/blog/2025/05/07/integridade-cientifica-e-agencia-humana-na-pesquisa-ia-gen/ ↩
11.LONG, D., and MAGERKO, B. What is AI Literacy? Competencies and Design Considerations. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. New York: ACM, 2020. [viewed 21 January 2026]. https://doi.org/10.1145/3313831.3376727. Available from: https://dl.acm.org/doi/10.1145/3313831.3376727 ↩
12.SMITH, G. R., et al. Ten simple rules for using large language models in science, version 1.0. PLoS Computational Biology [online]. 2024, vol. 20, no. 1, e1011767. [viewed 21 January 2026]. https://doi.org/10.1371/journal.pcbi.1011767. Available from: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1011767 ↩
References
BIRHANE, A., et al. Science in the age of large language models. Nature Reviews Physics [online]. 2023, vol. 5, pp. 277-280. [viewed 21 January 2026]. https://doi.org/10.1038/s42254-023-00581-4. Available from: https://www.nature.com/articles/s42254-023-00581-4
COPE COUNCIL. COPE Authorship and AI tools [online]. Committee on Publication Ethics, 2023. [viewed 21 January 2026]. https://doi.org/10.24318/cCVRZBms. Available from: https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools
GANJAVI, C., et al. Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis. The BMJ [online]. 2024, vol. 384, e077192. [viewed 21 January 2026]. https://doi.org/10.1136/bmj-2023-077192. Available from: https://www.bmj.com/content/384/bmj-2023-077192
HOSSEINI, M., RASMUSSEN, L. M., and RESNIK, D. B. Using AI to write scholarly publications. Accountability in Research [online]. 2023, vol. 31, no. 7, pp. 715-723. [viewed 21 January 2026]. https://doi.org/10.1080/08989621.2023.2168535. Available from: https://www.tandfonline.com/doi/full/10.1080/08989621.2023.2168535
LIANG, W., et al. Quantifying large language model usage in scientific papers. Nature Human Behaviour [online]. 2025. [viewed 21 January 2026]. https://doi.org/10.1038/s41562-025-02273-8. Available from: https://www.nature.com/articles/s41562-025-02273-8
LLOYD, D. Epistemic responsibility: toward a community standard for human-AI collaborations. Frontiers in Artificial Intelligence [online]. 2025, vol. 8, 1635691. [viewed 21 January 2026]. https://doi.org/10.3389/frai.2025.1635691. Available from: https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1635691/full
LONG, D., and MAGERKO, B. What is AI Literacy? Competencies and Design Considerations. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. New York: ACM, 2020. [viewed 21 January 2026]. https://doi.org/10.1145/3313831.3376727. Available from: https://dl.acm.org/doi/10.1145/3313831.3376727
Nature Editorial. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature [online]. 2023, vol. 613, p. 612. [viewed 21 January 2026]. https://doi.org/10.1038/d41586-023-00191-1. Available from: https://www.nature.com/articles/d41586-023-00191-1
RESNIK, D. B., and HOSSEINI, M. The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool. AI and Ethics [online]. 2024, vol. 5, no. 2, pp. 1499-1521. [viewed 21 January 2026]. https://doi.org/10.1007/s43681-024-00493-8. Available from: https://link.springer.com/article/10.1007/s43681-024-00493-8
SAMPAIO, R. C., SABBATINI, M., and LIMONGI, R. Pesquisadores brasileiros lançam diretrizes para o uso ético e responsável da Inteligência Artificial Generativa (IAG) [online]. SciELO em Perspectiva, 2025. [viewed 21 January 2026]. Available from: https://blog.scielo.org/blog/2025/02/05/pesquisadores-lancam-diretrizes-iag/
SMITH, G. R., et al. Ten simple rules for using large language models in science, version 1.0. PLoS Computational Biology [online]. 2024, vol. 20, no. 1, e1011767. [viewed 21 January 2026]. https://doi.org/10.1371/journal.pcbi.1011767. Available from: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1011767
THORP, H. H. ChatGPT is fun, but not an author. Science [online]. 2023, vol. 379, no. 6630, p. 313. [viewed 21 January 2026]. https://doi.org/10.1126/science.adg7879. Available from: https://www.science.org/doi/10.1126/science.adg7879
VASCONCELOS, S., and MARUŠIĆ, A. Integridade científica e agência humana na pesquisa entremeada por IA Generativa [online]. SciELO em Perspectiva, 2025. [viewed 21 January 2026]. Available from: https://blog.scielo.org/blog/2025/05/07/integridade-cientifica-e-agencia-humana-na-pesquisa-ia-gen/
About Ricardo Limongi França Coelho
Professor of Marketing and Artificial Intelligence at the Universidade Federal de Goiás (UFG), Goiânia, GO, Editor-in-Chief of the Brazilian Administration Review (BAR), ANPAD's journal, and DT-CNPq Scholarship Recipient.
About Luis Carlos Coelho
Economist, graduated from the Universidade de Mogi das Cruzes.
Translated from the original in Portuguese by Lilian Nassi-Calò.