{"id":5804,"date":"2026-01-19T10:00:13","date_gmt":"2026-01-19T13:00:13","guid":{"rendered":"https:\/\/blog.scielo.org\/en\/?p=5804"},"modified":"2026-01-19T16:14:29","modified_gmt":"2026-01-19T19:14:29","slug":"who-is-the-midwife-and-who-is-the-parturient-the-maieutic-perspective-for-rethinking-authorship-and-epistemic-responsibility-in-the-use-of-ai-in-scientific-output","status":"publish","type":"post","link":"https:\/\/blog.scielo.org\/en\/2026\/01\/19\/who-is-the-midwife-and-who-is-the-parturient-the-maieutic-perspective-for-rethinking-authorship-and-epistemic-responsibility-in-the-use-of-ai-in-scientific-output\/","title":{"rendered":"Who is the midwife and who is the parturient? The maieutic perspective for rethinking authorship and epistemic responsibility in the use of AI in scientific output"},"content":{"rendered":"<h3>By <strong>Ricardo Limongi Fran\u00e7a Coelho and Luis Carlos Coelho<\/strong><\/h3>\n<h3>Introduction<\/h3>\n<div id=\"attachment_5808\" style=\"width: 250px\" class=\"wp-caption alignright\"><a href=\"http:\/\/blog.scielo.org\/en\/wp-content\/uploads\/sites\/2\/2026\/01\/liah-martens-5H4VLB2Vq_Q-unsplash.jpg\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-5808\" class=\"wp-image-5808 size-medium\" title=\"Photograph of a hand touching a lamp.\" src=\"http:\/\/blog.scielo.org\/en\/wp-content\/uploads\/sites\/2\/2026\/01\/liah-martens-5H4VLB2Vq_Q-unsplash-240x300.jpg\" alt=\"Photograph of a hand touching a lamp.\" width=\"240\" height=\"300\" srcset=\"https:\/\/blog.scielo.org\/en\/wp-content\/uploads\/sites\/2\/2026\/01\/liah-martens-5H4VLB2Vq_Q-unsplash-240x300.jpg 240w, https:\/\/blog.scielo.org\/en\/wp-content\/uploads\/sites\/2\/2026\/01\/liah-martens-5H4VLB2Vq_Q-unsplash-768x960.jpg 768w, https:\/\/blog.scielo.org\/en\/wp-content\/uploads\/sites\/2\/2026\/01\/liah-martens-5H4VLB2Vq_Q-unsplash.jpg 800w\" sizes=\"auto, (max-width: 240px) 100vw, 240px\" \/><\/a><p 
id=\"caption-attachment-5808\" class=\"wp-caption-text\"><em>Image: <a href=\"https:\/\/unsplash.com\/pt-br\/fotografias\/pessoa-tocando-lampada-5H4VLB2Vq_Q\"><br \/>Liah Martens on Unsplash<\/a>.<\/em><\/p><\/div>\n<p>The emergence of generative artificial intelligence in scientific output has sparked intense debates about integrity, transparency, and, above all, authorship. Recent studies indicate that the use of language models in scientific articles has grown exponentially: analyses of large corpora have identified evidence of AI modification in up to 22.5% of computer science abstracts. High-impact scientific journals, publishers, and publication ethics organizations have taken a relatively consensual position: AI systems cannot be listed as authors. However, beyond normative guidelines, a fundamental philosophical question remains: what is the nature of the relationship between researchers and AI in the knowledge production process?<\/p>\n<p>This post proposes that the Socratic maieutic perspective, the method of \u201cgiving birth to ideas\u201d, offers a fruitful conceptual framework for rethinking this relationship. More than a tool for extracting information, AI can be understood as a dialogical interlocutor that helps researchers articulate knowledge that is already gestating within them.<\/p>\n<h3>Maieutics as a metaphor for knowledge production<\/h3>\n<p>The term \u201cmaieutics\u201d derives from the Greek <em>maieutik\u00f3s<\/em>, which means \u201cthe art of midwifery\u201d. Socrates used this metaphor deliberately: just as his mother, Phaenarete, was a midwife and helped women give birth to children, he saw himself as a midwife of ideas, helping his interlocutors give birth to knowledge they already carried within themselves but had not yet articulated or become conscious of.<\/p>\n<p>The method works essentially through successive questions. 
Instead of transmitting knowledge directly, Socrates questioned his interlocutors in order to make them examine their own beliefs, identify contradictions in their reasoning, and gradually arrive at more refined understandings. The underlying assumption is that genuine knowledge cannot simply be deposited in someone from outside but must emerge from an active process of reflection and personal discovery.<\/p>\n<p>This perspective has profound implications for the discussion of AI and scientific authorship. If genuine knowledge emerges from a dialogical process of questioning and reflection, then the central question is not whether AI participated in the text\u2019s production, but how it participated and, fundamentally, who remains responsible for the knowledge produced.<\/p>\n<h3>From extraction to reflection: repositioning the researcher-AI relationship<\/h3>\n<p>The predominant use of generative AI systems in academia has followed an instrumental logic: AI is treated as an oracle that provides ready-made answers \u2014 texts, abstracts, analyses, code. This approach, which we could call \u201cextractivist\u201d, positions the researcher as a passive consumer of machine-generated outputs. As Birhane <em>et al<\/em>. (2023) warn in the article <em>Science in the age of large language models<\/em><a id=\"nt1\" href=\"#rf1\"><sup>1<\/sup><\/a>, published in <em>Nature Reviews Physics<\/em>, large language models introduce significant risks to science, including the possible erosion of critical thinking and originality.<\/p>\n<p>The maieutic perspective suggests a radical repositioning of this relationship. Instead of extracting answers, researchers can use AI as a dialogical partner that helps them clarify, examine, and refine ideas in development. 
The iterative process of prompting thus becomes a contemporary form of Socratic dialogue: each AI response is not an end point, but a stimulus for further reflection.<\/p>\n<p>From this perspective, AI takes on the role of a \u201cmidwife\u201d: it asks questions, offers alternative perspectives, identifies gaps in reasoning, and challenges assumptions. But it is still the researcher who \u201cdelivers\u201d the knowledge. Epistemic responsibility, the ability and obligation to account for the knowledge produced, remains non-transferable. As Lloyd (2025) argues in <em>Epistemic responsibility: toward a community standard for human-AI collaborations<\/em><a id=\"nt2\" href=\"#rf2\"><sup>2<\/sup><\/a>, it is necessary to develop community standards for human-AI collaborations that preserve this epistemic responsibility.<\/p>\n<h3>Authorship and epistemic responsibility: the emerging consensus<\/h3>\n<p>The leading publication ethics organizations converge on one fundamental point: AI systems cannot be authors because they cannot take responsibility. The COPE Council (2023)<a id=\"nt3\" href=\"#rf3\"><sup>3<\/sup><\/a> has established in its official position that AI tools do not meet the requirements for authorship because they \u201ccannot be held accountable for the work\u201d nor \u201cmanage issues related to accuracy, integrity, and originality.\u201d Similarly, Hosseini, Rasmussen and Resnik (2023)<a id=\"nt4\" href=\"#rf4\"><sup>4<\/sup><\/a> argue in <em>Using AI to write scholarly publications<\/em> that natural language processing systems cannot be considered authors because they lack moral agency.<\/p>\n<p>The journal <em>Science<\/em> was categorical in stating, through its editor-in-chief, that \u201cAI-generated text is (&#8230;) analogous to plagiarism\u201d (Thorp, 2023, p. 313)<a id=\"nt5\" href=\"#rf5\"><sup>5<\/sup><\/a>. 
<em>Nature<\/em> established its own guidelines, emphasizing that \u201clarge-scale language models (LLMs), such as ChatGPT, do not meet our criteria for authorship\u201d (Nature Editorial, 2023, p. 612)<a id=\"nt6\" href=\"#rf6\"><sup>6<\/sup><\/a>. These positions reflect a deep understanding of the nature of scientific authorship: being an author is not just about producing text, but about taking epistemic responsibility for the knowledge communicated.<\/p>\n<p>A bibliometric analysis conducted by Ganjavi <em>et al.<\/em> (2024), published in <em>The BMJ<\/em> as <em>Publishers&#8217; and journals&#8217; instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis<\/em><a id=\"nt7\" href=\"#rf7\"><sup>7<\/sup><\/a>, examined the guidelines of the 100 leading academic publishers and the 100 highest-ranked scientific journals. The results revealed that only 24% of publishers had specific guidelines on generative AI, but among those that did, 96-98% prohibited the attribution of authorship to AI systems. This highlights an emerging consensus that links authorship to responsibility \u2014 something that machines, by definition, cannot assume.<\/p>\n<h3>Implications for editorial policies: from prohibition to qualified transparency<\/h3>\n<p>The maieutic perspective offers a more nuanced framework for editorial policies than the simple permission\/prohibition dichotomy. What matters is not only whether AI was used, but also the nature of that use and, fundamentally, whether the researcher maintained their position as a responsible epistemic agent throughout the process.<\/p>\n<p>An analysis of the guidelines of major publishers reveals important nuances. Elsevier distinguishes between acceptable uses (language refinement, analysis support) and prohibited uses (generation of scientific images, production of text without critical supervision). 
Springer Nature emphasizes that any use must be documented in the methods section. As documented by Resnik and Hosseini (2024) in <em>The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool<\/em><a id=\"nt8\" href=\"#rf8\"><sup>8<\/sup><\/a>, there is an urgent need for new ethical guidelines for the use of AI in scientific research, including recommendations ranging from a clear definition of the role of AI to mechanisms for verifying generated content.<\/p>\n<p>In the Brazilian context, Sampaio, Sabbatini and Limongi (2025), in <a href=\"https:\/\/blog.scielo.org\/en\/2025\/02\/05\/brazilian-researchers-launch-guidelines-genai\/\" target=\"_blank\" rel=\"noopener\">Brazilian researchers launch Guidelines for the ethical and responsible use of Generative Artificial Intelligence (GenAI)<\/a><a id=\"nt9\" href=\"#rf9\"><sup>9<\/sup><\/a>, proposed specific guidelines for the ethical and responsible use of generative AI in academic research, emphasizing principles of transparency, responsibility, and integrity. As highlighted by Vasconcelos and Maru\u0161i\u0107 (2025) in the post <a href=\"https:\/\/blog.scielo.org\/en\/2025\/05\/07\/research-integrity-and-human-agency-in-research-gen-ai\/\" target=\"_blank\" rel=\"noopener\">Research Integrity and Human Agency in Research Intertwined with Generative<\/a> AI<a id=\"nt10\" href=\"#rf10\"><sup>10<\/sup><\/a>, published on the SciELO in Perspective blog, the central issue is to preserve human agency in research intertwined with generative AI. These distinctions point to a common principle: transparency should not be an end in itself, but a means of assessing whether epistemic responsibility has been adequately exercised.<\/p>\n<h3>Training researchers: teaching them to engage in dialogue, not to extract<\/h3>\n<p>The maieutic perspective has direct implications for training researchers in AI-related skills \u2014 what the literature has termed AI literacy. 
In a seminal work, <em>What is AI Literacy? Competencies and Design Considerations<\/em><a id=\"nt11\" href=\"#rf11\"><sup>11<\/sup><\/a>, Long and Magerko (2020) defined AI literacy as \u201ca set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool\u201d (p. 2). The authors identified 17 essential competencies, organized into dimensions ranging from technical understanding to ethical evaluation.<\/p>\n<p>More than teaching how to use AI tools efficiently, it is about developing the ability to engage critically with these systems. Smith <em>et al<\/em>. (2024)<a id=\"nt12\" href=\"#rf12\"><sup>12<\/sup><\/a> proposed \u201cten simple rules for using large language models in science,\u201d published in <em>PLoS Computational Biology<\/em>, which include principles such as maintaining healthy skepticism, verifying outputs, documenting usage, and understanding the limitations of models.<\/p>\n<p>This involves specific skills: formulating questions that stimulate reflection rather than just requesting answers; critically evaluating the outputs generated; identifying biases, gaps, and inconsistencies; and, fundamentally, maintaining awareness of one&#8217;s own position as a responsible epistemic agent. These skills echo the Socratic tradition: the proper use of AI, like good philosophical dialogue, requires a willingness to question, including oneself.<\/p>\n<p>Graduate programs and training in scientific methodology need to incorporate these dimensions. It is not a question of prohibiting the use of AI or encouraging it uncritically, but of training researchers to use these technologies in a way that expands, rather than replaces, their intellectual agency.<\/p>\n<h3>Concluding remarks: AI as a midwife, not an oracle<\/h3>\n<p>The maieutic metaphor offers a conceptual framework that transcends the simplistic dichotomies of the current debate on AI in science. 
Neither a neutral tool to be used without restrictions nor a threat to be banned, generative AI can be understood as a dialogical interlocutor: a contemporary \u201cmidwife\u201d that helps researchers give birth to knowledge they already carry within them.<\/p>\n<p>This perspective does not resolve all tensions: questions remain about algorithmic bias, unequal access, environmental impacts, and the risks of homogenizing scientific thought. However, it repositions the discussion about authorship on a more solid footing: the central issue is not the participation of AI, but the maintenance of the researcher&#8217;s epistemic responsibility.<\/p>\n<p>For the scholarly communication community (editors, reviewers, and journal managers), the challenge is to develop mechanisms that evaluate not only transparency in the use of AI, but also the quality of the dialogical process that this use represents. For researchers, the challenge is to learn to use AI as Socrates used dialogue: not to obtain ready-made answers, but to refine their own thinking.<\/p>\n<p>After all, in maieutics, the midwife is not the protagonist of birth. That role belongs to the person giving birth. In AI-mediated scientific output, genuine knowledge continues to be born from the researcher. AI can assist in the birth, but the responsibility for what is born remains, inescapably, human.<\/p>\n<h3>Notes<\/h3>\n<p>1. BIRHANE, A., <em>et al<\/em>. Science in the age of large language models. <em>Nature Reviews Physics<\/em> [online]. 2023, vol. 5, pp. 277-280. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1038\/s42254-023-00581-4\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1038\/s42254-023-00581-4<\/a>. Available from: <a href=\"https:\/\/www.nature.com\/articles\/s42254-023-00581-4\" target=\"_blank\" rel=\"noopener\">https:\/\/www.nature.com\/articles\/s42254-023-00581-4<\/a> <a id=\"rf1\" href=\"#nt1\">\u21a9<\/a><\/p>\n<p>2. LLOYD, D. 
Epistemic responsibility: toward a community standard for human-AI collaborations. <em>Frontiers in Artificial Intelligence<\/em> [online]. 2025, vol. 8, 1635691. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.3389\/frai.2025.1635691\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.3389\/frai.2025.1635691<\/a>. Available from: <a href=\"https:\/\/www.frontiersin.org\/journals\/artificial-intelligence\/articles\/10.3389\/frai.2025.1635691\/full\" target=\"_blank\" rel=\"noopener\">https:\/\/www.frontiersin.org\/journals\/artificial-intelligence\/articles\/10.3389\/frai.2025.1635691\/full<\/a> <a id=\"rf2\" href=\"#nt2\">\u21a9<\/a><\/p>\n<p>3. COPE COUNCIL. COPE Authorship and AI tools [online]. Committee on Publication Ethics, 2023. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.24318\/cCVRZBms\">https:\/\/doi.org\/10.24318\/cCVRZBms<\/a>. Available from: <a href=\"https:\/\/publicationethics.org\/guidance\/cope-position\/authorship-and-ai-tools\" target=\"_blank\" rel=\"noopener\">https:\/\/publicationethics.org\/guidance\/cope-position\/authorship-and-ai-tools<\/a> <a id=\"rf3\" href=\"#nt3\">\u21a9<\/a><\/p>\n<p>4. HOSSEINI, M., RASMUSSEN, L. M., and RESNIK, D. B. Using AI to write scholarly publications. <em>Accountability in Research<\/em> [online]. 2023, vol. 31, no. 7, pp. 715-723. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1080\/08989621.2023.2168535\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1080\/08989621.2023.2168535<\/a>. Available from: <a href=\"https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/08989621.2023.2168535\" target=\"_blank\" rel=\"noopener\">https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/08989621.2023.2168535<\/a> <a id=\"rf4\" href=\"#nt4\">\u21a9<\/a><\/p>\n<p>5. THORP, H. H. ChatGPT is fun, but not an author. <em>Science<\/em> [online]. 2023, vol. 379, no. 6630, p. 313. [viewed 21 January 2026]. 
<a href=\"https:\/\/doi.org\/10.1126\/science.adg7879\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1126\/science.adg7879<\/a>. Available from: <a href=\"https:\/\/www.science.org\/doi\/10.1126\/science.adg7879\" target=\"_blank\" rel=\"noopener\">https:\/\/www.science.org\/doi\/10.1126\/science.adg7879<\/a> <a id=\"rf5\" href=\"#nt5\">\u21a9<\/a><\/p>\n<p>6. Nature Editorial. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. <em>Nature<\/em> [online]. 2023, vol. 613, p. 612. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1038\/d41586-023-00191-1\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1038\/d41586-023-00191-1<\/a>. Available from: <a href=\"https:\/\/www.nature.com\/articles\/d41586-023-00191-1\" target=\"_blank\" rel=\"noopener\">https:\/\/www.nature.com\/articles\/d41586-023-00191-1<\/a> <a id=\"rf6\" href=\"#nt6\">\u21a9<\/a><\/p>\n<p>7. GANJAVI, C., <em>et al<\/em>. Publishers&#8217; and journals&#8217; instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis. <em>The BMJ<\/em> [online]. 2024, vol. 384, e077192. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1136\/bmj-2023-077192\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1136\/bmj-2023-077192<\/a>. Available from: <a href=\"https:\/\/www.bmj.com\/content\/384\/bmj-2023-077192\" target=\"_blank\" rel=\"noopener\">https:\/\/www.bmj.com\/content\/384\/bmj-2023-077192<\/a> <a id=\"rf7\" href=\"#nt7\">\u21a9<\/a><\/p>\n<p>8. RESNIK, D. B., and HOSSEINI, M. The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool. <em>AI Ethics<\/em> [online]. 2024, vol. 5, no. 2, pp. 1499-1521. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1007\/s43681-024-00493-8\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1007\/s43681-024-00493-8<\/a>. 
Available from: <a href=\"https:\/\/link.springer.com\/article\/10.1007\/s43681-024-00493-8\" target=\"_blank\" rel=\"noopener\">https:\/\/link.springer.com\/article\/10.1007\/s43681-024-00493-8<\/a> <a id=\"rf8\" href=\"#nt8\">\u21a9<\/a><\/p>\n<p>9. SAMPAIO, R. C., SABBATINI, M., and LIMONGI, R. Pesquisadores brasileiros lan\u00e7am diretrizes para o uso \u00e9tico e respons\u00e1vel da Intelig\u00eancia Artificial Generativa (IAG) [online]. <em>SciELO em Perspectiva<\/em>, 2025. [viewed 21 January 2026]. Available from: <a href=\"https:\/\/blog.scielo.org\/blog\/2025\/02\/05\/pesquisadores-lancam-diretrizes-iag\/\" target=\"_blank\" rel=\"noopener\">https:\/\/blog.scielo.org\/blog\/2025\/02\/05\/pesquisadores-lancam-diretrizes-iag\/<\/a> <a id=\"rf9\" href=\"#nt9\">\u21a9<\/a><\/p>\n<p>10. VASCONCELOS, S., and MARU\u0160I\u0106, A. Integridade cient\u00edfica e ag\u00eancia humana na pesquisa entremeada por IA Generativa [online]. <em>SciELO em Perspectiva<\/em>. 2025. [viewed 21 January 2026]. Available from: <a href=\"https:\/\/blog.scielo.org\/blog\/2025\/05\/07\/integridade-cientifica-e-agencia-humana-na-pesquisa-ia-gen\/\" target=\"_blank\" rel=\"noopener\">https:\/\/blog.scielo.org\/blog\/2025\/05\/07\/integridade-cientifica-e-agencia-humana-na-pesquisa-ia-gen\/<\/a> <a id=\"rf10\" href=\"#nt10\">\u21a9<\/a><\/p>\n<p>11. LONG, D., and MAGERKO, B. What is AI Literacy? Competencies and Design Considerations. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. New York: ACM, 2020. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1145\/3313831.3376727\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1145\/3313831.3376727<\/a>. Available from: <a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3313831.3376727\" target=\"_blank\" rel=\"noopener\">https:\/\/dl.acm.org\/doi\/10.1145\/3313831.3376727<\/a> <a id=\"rf11\" href=\"#nt11\">\u21a9<\/a><\/p>\n<p>12. SMITH, G. R., <em>et al<\/em>. 
Ten simple rules for using large language models in science, version 1.0. <em>PLoS Computational Biology<\/em> [online]. 2024, vol. 20, no. 1, e1011767. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1371\/journal.pcbi.1011767\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1371\/journal.pcbi.1011767<\/a>. Available from: <a href=\"https:\/\/journals.plos.org\/ploscompbiol\/article?id=10.1371\/journal.pcbi.1011767\" target=\"_blank\" rel=\"noopener\">https:\/\/journals.plos.org\/ploscompbiol\/article?id=10.1371\/journal.pcbi.1011767<\/a> <a id=\"rf12\" href=\"#nt12\">\u21a9<\/a><\/p>\n<h3>References<\/h3>\n<p>BIRHANE, A., <em>et al<\/em>. Science in the age of large language models. <em>Nature Reviews Physics<\/em> [online]. 2023, vol. 5, pp. 277-280. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1038\/s42254-023-00581-4\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1038\/s42254-023-00581-4<\/a>. Available from: <a href=\"https:\/\/www.nature.com\/articles\/s42254-023-00581-4\" target=\"_blank\" rel=\"noopener\">https:\/\/www.nature.com\/articles\/s42254-023-00581-4<\/a><\/p>\n<p>COPE COUNCIL. COPE Authorship and AI tools [online]. Committee on Publication Ethics, 2023. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.24318\/cCVRZBms\">https:\/\/doi.org\/10.24318\/cCVRZBms<\/a>. Available from: <a href=\"https:\/\/publicationethics.org\/guidance\/cope-position\/authorship-and-ai-tools\" target=\"_blank\" rel=\"noopener\">https:\/\/publicationethics.org\/guidance\/cope-position\/authorship-and-ai-tools<\/a><\/p>\n<p>GANJAVI, C., <em>et al<\/em>. Publishers&#8217; and journals&#8217; instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis. <em>The BMJ<\/em> [online]. 2024, vol. 384, e077192. [viewed 21 January 2026]. 
<a href=\"https:\/\/doi.org\/10.1136\/bmj-2023-077192\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1136\/bmj-2023-077192<\/a>. Available from: <a href=\"https:\/\/www.bmj.com\/content\/384\/bmj-2023-077192\" target=\"_blank\" rel=\"noopener\">https:\/\/www.bmj.com\/content\/384\/bmj-2023-077192<\/a><\/p>\n<p>HOSSEINI, M., RASMUSSEN, L. M., and RESNIK, D. B. Using AI to write scholarly publications. <em>Accountability in Research<\/em> [online]. 2023, vol. 31, no. 7, pp. 715-723. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1080\/08989621.2023.2168535\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1080\/08989621.2023.2168535<\/a>. Available from: <a href=\"https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/08989621.2023.2168535\" target=\"_blank\" rel=\"noopener\">https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/08989621.2023.2168535<\/a><\/p>\n<p>LIANG, W., <em>et al<\/em>. Quantifying large language model usage in scientific papers. <em>Nature Human Behaviour<\/em> [online]. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1038\/s41562-025-02273-8\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1038\/s41562-025-02273-8<\/a>. Available from: <a href=\"https:\/\/www.nature.com\/articles\/s41562-025-02273-8\" target=\"_blank\" rel=\"noopener\">https:\/\/www.nature.com\/articles\/s41562-025-02273-8<\/a><\/p>\n<p>LLOYD, D. Epistemic responsibility: toward a community standard for human-AI collaborations. <em>Frontiers in Artificial Intelligence<\/em> [online]. 2025, vol. 8, 1635691. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.3389\/frai.2025.1635691\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.3389\/frai.2025.1635691<\/a>. 
Available from: <a href=\"https:\/\/www.frontiersin.org\/journals\/artificial-intelligence\/articles\/10.3389\/frai.2025.1635691\/full\" target=\"_blank\" rel=\"noopener\">https:\/\/www.frontiersin.org\/journals\/artificial-intelligence\/articles\/10.3389\/frai.2025.1635691\/full<\/a><\/p>\n<p>LONG, D., and MAGERKO, B. What is AI Literacy? Competencies and Design Considerations. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. New York: ACM, 2020. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1145\/3313831.3376727\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1145\/3313831.3376727<\/a>. Available from: <a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3313831.3376727\" target=\"_blank\" rel=\"noopener\">https:\/\/dl.acm.org\/doi\/10.1145\/3313831.3376727<\/a><\/p>\n<p>Nature Editorial. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. <em>Nature<\/em> [online]. 2023, vol. 613, p. 612. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1038\/d41586-023-00191-1\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1038\/d41586-023-00191-1<\/a>. Available from: <a href=\"https:\/\/www.nature.com\/articles\/d41586-023-00191-1\" target=\"_blank\" rel=\"noopener\">https:\/\/www.nature.com\/articles\/d41586-023-00191-1<\/a><\/p>\n<p>RESNIK, D. B., and HOSSEINI, M. The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool. <em>AI Ethics<\/em> [online]. 2024, vol. 5, no. 2, pp. 1499-1521. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1007\/s43681-024-00493-8\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1007\/s43681-024-00493-8<\/a>. Available from: <a href=\"https:\/\/link.springer.com\/article\/10.1007\/s43681-024-00493-8\" target=\"_blank\" rel=\"noopener\">https:\/\/link.springer.com\/article\/10.1007\/s43681-024-00493-8<\/a><\/p>\n<p>SAMPAIO, R. C., SABBATINI, M., and LIMONGI, R. 
Pesquisadores brasileiros lan\u00e7am diretrizes para o uso \u00e9tico e respons\u00e1vel da Intelig\u00eancia Artificial Generativa (IAG) [online]. <em>SciELO em Perspectiva<\/em>, 2025. [viewed 21 January 2026]. Available from: <a href=\"https:\/\/blog.scielo.org\/blog\/2025\/02\/05\/pesquisadores-lancam-diretrizes-iag\/\" target=\"_blank\" rel=\"noopener\">https:\/\/blog.scielo.org\/blog\/2025\/02\/05\/pesquisadores-lancam-diretrizes-iag\/<\/a><\/p>\n<p>SMITH, G. R., <em>et al<\/em>. Ten simple rules for using large language models in science, version 1.0. <em>PLoS Computational Biology<\/em> [online]. 2024, vol. 20, no. 1, e1011767. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1371\/journal.pcbi.1011767\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1371\/journal.pcbi.1011767<\/a>. Available from: <a href=\"https:\/\/journals.plos.org\/ploscompbiol\/article?id=10.1371\/journal.pcbi.1011767\" target=\"_blank\" rel=\"noopener\">https:\/\/journals.plos.org\/ploscompbiol\/article?id=10.1371\/journal.pcbi.1011767<\/a><\/p>\n<p>THORP, H. H. ChatGPT is fun, but not an author. <em>Science<\/em> [online]. 2023, vol. 379, no. 6630, p. 313. [viewed 21 January 2026]. <a href=\"https:\/\/doi.org\/10.1126\/science.adg7879\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1126\/science.adg7879<\/a>. Available from: <a href=\"https:\/\/www.science.org\/doi\/10.1126\/science.adg7879\" target=\"_blank\" rel=\"noopener\">https:\/\/www.science.org\/doi\/10.1126\/science.adg7879<\/a><\/p>\n<p>VASCONCELOS, S., and MARU\u0160I\u0106, A. Integridade cient\u00edfica e ag\u00eancia humana na pesquisa entremeada por IA Generativa [online]. <em>SciELO em Perspectiva<\/em>. 
2025. [viewed 21 January 2026]. Available from: <a href=\"https:\/\/blog.scielo.org\/blog\/2025\/05\/07\/integridade-cientifica-e-agencia-humana-na-pesquisa-ia-gen\/\" target=\"_blank\" rel=\"noopener\">https:\/\/blog.scielo.org\/blog\/2025\/05\/07\/integridade-cientifica-e-agencia-humana-na-pesquisa-ia-gen\/<\/a><\/p>\n<p>&nbsp;<\/p>\n<h3>About Ricardo Limongi Fran\u00e7a Coelho<\/h3>\n<p>Professor of Marketing and Artificial Intelligence, Universidade Federal de Goi\u00e1s (UFG), Goi\u00e2nia\u2013GO, and Editor-in-Chief of Brazilian<a href=\"https:\/\/blog.scielo.org\/wp-content\/uploads\/2026\/01\/IMG_9481.jpeg\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"alignright wp-image-8162 size-full\" title=\"Photograph of Ricardo Limongi Fran\u00e7a Coelho\" src=\"https:\/\/blog.scielo.org\/wp-content\/uploads\/2026\/01\/IMG_9481.jpeg\" alt=\"Photograph of Ricardo Limongi Fran\u00e7a Coelho\" width=\"150\" height=\"150\" \/><\/a> Administration Review (BAR), a journal of ANPAD, and DT-CNPq Scholarship Recipient.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h3>About Luis Carlos Coelho<\/h3>\n<p><a href=\"http:\/\/blog.scielo.org\/en\/wp-content\/uploads\/sites\/2\/2026\/01\/luis_carlos.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"alignright wp-image-5822 size-full\" title=\"Photograph of Luis Carlos Coelho\" src=\"http:\/\/blog.scielo.org\/en\/wp-content\/uploads\/sites\/2\/2026\/01\/luis_carlos.png\" alt=\"Photograph of Luis Carlos Coelho\" width=\"150\" height=\"150\" \/><\/a><\/p>\n<p>Economist, Universidade de Mogi das Cruzes<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>Translated from the original in <a href=\"https:\/\/blog.scielo.org\/blog\/2026\/01\/21\/quem-e-o-parteir\u2026ducao-cientifica\/\" target=\"_blank\" rel=\"noopener\">Portuguese<\/a> by <span style=\"font-weight: 400;\">Lilian Nassi-Cal\u00f2<\/span><span style=\"font-weight: 
400;\">.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Socratic maieutic perspective offers a philosophical framework for rethinking the use of AI in scientific output. Instead of an oracle that provides answers, AI can be a dialogical partner that helps researchers to make latent knowledge explicit and thus reposition the discussion about authorship: the researcher remains the responsible epistemic agent, while AI acts as an intellectual midwife. <span class=\"ellipsis\">&hellip;<\/span> <span class=\"more-link-wrap\"><a href=\"https:\/\/blog.scielo.org\/en\/2026\/01\/19\/who-is-the-midwife-and-who-is-the-parturient-the-maieutic-perspective-for-rethinking-authorship-and-epistemic-responsibility-in-the-use-of-ai-in-scientific-output\/\" class=\"more-link\"><span>Read More &rarr;<\/span><\/a><\/span><\/p>\n","protected":false},"author":142,"featured_media":5808,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":"","_links_to":"","_links_to_target":""},"categories":[3],"tags":[84,86],"class_list":["post-5804","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-analysis","tag-artificial-intelligence","tag-ethics-in-scientific-communication"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/blog.scielo.org\/en\/wp-json\/wp\/v2\/posts\/5804","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.scielo.org\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.scielo.org\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.scielo.org\/en\/wp-json\/wp\/v2\/users\/142"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.scielo.org\/en\/wp-json\/wp\/v2\/comments?post=5804"}],"version-history":[{"count":15,"href":"https:\/\/blog.scielo.org\/en\/wp-json\/wp\/v2\/posts\/5804\/revisions"}],"predecessor-version":[{"id":5823,"href":"https:\/\/blog.scielo.org\/en\/wp-json\/wp\/v2\/posts\
/5804\/revisions\/5823"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.scielo.org\/en\/wp-json\/wp\/v2\/media\/5808"}],"wp:attachment":[{"href":"https:\/\/blog.scielo.org\/en\/wp-json\/wp\/v2\/media?parent=5804"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.scielo.org\/en\/wp-json\/wp\/v2\/categories?post=5804"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.scielo.org\/en\/wp-json\/wp\/v2\/tags?post=5804"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}