{"id":5612,"date":"2025-05-07T10:00:04","date_gmt":"2025-05-07T13:00:04","guid":{"rendered":"https:\/\/blog.scielo.org\/en\/?p=5612"},"modified":"2025-05-07T09:03:20","modified_gmt":"2025-05-07T12:03:20","slug":"research-integrity-and-human-agency-in-research-gen-ai","status":"publish","type":"post","link":"https:\/\/blog.scielo.org\/en\/2025\/05\/07\/research-integrity-and-human-agency-in-research-gen-ai\/","title":{"rendered":"Research Integrity and Human Agency in Research Intertwined with Generative AI"},"content":{"rendered":"<p><strong>By Sonia Vasconcelos and Ana Maru\u0161i\u0107<\/strong><\/p>\n<div id=\"attachment_5614\" style=\"width: 310px\" class=\"wp-caption alignright\"><a href=\"http:\/\/blog.scielo.org\/en\/wp-content\/uploads\/sites\/2\/2025\/05\/bernd-dittrich-1EhmvvIWNcg-unsplash.jpg\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-5614\" class=\"wp-image-5614 size-medium\" title=\"Photograph of a whiteboard with \u201cZCL-LLM-OPENAI\u201d written on it.\" src=\"http:\/\/blog.scielo.org\/en\/wp-content\/uploads\/sites\/2\/2025\/05\/bernd-dittrich-1EhmvvIWNcg-unsplash-300x180.jpg\" alt=\"Photograph of a whiteboard with \u201cZCL-LLM-OPENAI\u201d written on it.\" width=\"300\" height=\"180\" srcset=\"https:\/\/blog.scielo.org\/en\/wp-content\/uploads\/sites\/2\/2025\/05\/bernd-dittrich-1EhmvvIWNcg-unsplash-300x180.jpg 300w, https:\/\/blog.scielo.org\/en\/wp-content\/uploads\/sites\/2\/2025\/05\/bernd-dittrich-1EhmvvIWNcg-unsplash-768x461.jpg 768w, https:\/\/blog.scielo.org\/en\/wp-content\/uploads\/sites\/2\/2025\/05\/bernd-dittrich-1EhmvvIWNcg-unsplash-150x90.jpg 150w, https:\/\/blog.scielo.org\/en\/wp-content\/uploads\/sites\/2\/2025\/05\/bernd-dittrich-1EhmvvIWNcg-unsplash.jpg 1000w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><p id=\"caption-attachment-5614\" class=\"wp-caption-text\"><em>Image: <a 
href=\"https:\/\/unsplash.com\/pt-br\/fotografias\/um-close-up-de-uma-parede-branca-com-escrita-nela-1EhmvvIWNcg\">Bernd Dittrich via Unsplash<\/a>.<\/em><\/p><\/div>\n<p>Since the popularization of Generative Artificial Intelligence (Gen AI) models, especially OpenAI&#8217;s <a href=\"https:\/\/openai.com\/index\/chatgpt\/\" target=\"_blank\" rel=\"noopener\">ChatGPT<\/a>, at the end of 2022, there has been a gradual and profound transformation in the research endeavor (<a href=\"https:\/\/doi.org\/10.1038\/d41586-023-00191-1\" target=\"_blank\" rel=\"noopener\"><em>Nature<\/em>, 2023<\/a>; <a href=\"https:\/\/doi.org\/10.1038\/d41586-023-03930-6\" target=\"_blank\" rel=\"noopener\">Van Noorden, R. and Webb, R., 2023<\/a>). Underpinning this ongoing change is the ethical regulation of Gen AI use, considering its disruptive potential for science at large, with unprecedented influence on scientific communication. The increasing autonomy of these systems and their interaction with what we call human agency (<a href=\"https:\/\/doi.org\/10.1037\/0003-066X.44.9.1175\" target=\"_blank\" rel=\"noopener\">Bandura, A., 1989<\/a>; <a href=\"https:\/\/www.pewresearch.org\/wp-content\/uploads\/sites\/20\/2023\/02\/PI_2023.02.24_The-Future-of-Human-Agency_FINAL.pdf\" target=\"_blank\" rel=\"noopener\">Pew Research Center, 2023<\/a>) is one of the sensitive points in this process.<\/p>\n<p>It can be argued that scientific communication is undergoing a reconfiguration process that, from a conservative perspective, is as paradigmatic as the one triggered by the creation of the first scientific journal, <em>Philosophical Transactions<\/em>, in 1665, in England.
From a more disruptive perspective, this transformation will reconfigure the entire scientific culture, redefining the autonomy of scientists and institutions in the production and certification of knowledge\u2014an impact whose dimension is still not possible to estimate.<\/p>\n<p>This more radical view posits that as Gen AI becomes more integrated into research activities and society at large, it goes beyond merely being a new tool or adapting existing scientific practices to a new technical framework. The incorporation of Gen AI in science (regardless of the debates over the term) suggests a fundamental shift in the foundations of scientific activity, potentially transforming both the methods and goals of scientific research. Also, it is a dual-use technology, and as such, it can serve both beneficial and harmful purposes.<\/p>\n<p>According to the report <a href=\"https:\/\/ide.mit.edu\/wp-content\/uploads\/2024\/04\/Davos-Report-Draft-XFN-Copy-01112024-Print-Version.pdf?x76181\" target=\"_blank\" rel=\"noopener\">Generally Faster: The Economic Impact of Generative AI<\/a><sup><a id=\"nt1\" href=\"#rf1\">1<\/a><\/sup> (McAfee, A., 2024), Gen AI constitutes a technology capable of generating original content, continuously improving through its own usage, and exhibiting widespread economic and social impacts more rapidly than previous technologies. As described by Krakowski in <a href=\"https:\/\/doi.org\/10.1016\/j.infoandorg.2025.100560\" target=\"_blank\" rel=\"noopener\">Human-AI Agency in the Age of Generative AI<\/a><sup><a id=\"nt2\" href=\"#rf2\">2<\/a><\/sup> (2025),<\/p>\n<blockquote><p>unlike predictive AI, which often requires technical expertise and infrastructure to implement and use effectively, GenAI&#8217;s pretrained nature and natural-language interfaces lower adoption barriers. This allows humans without IT [Information Technology], data science, or programming skills to engage with AI systems [\u2026].
The accessibility and affordability of contemporary frontier models, such as GPT-4o for general-purpose multimodal uses, <a href=\"https:\/\/github.com\/features\/copilot\" target=\"_blank\" rel=\"noopener\">GitHub Copilot<\/a> for coding and software development, or <a href=\"https:\/\/pi.ai\/talk\" target=\"_blank\" rel=\"noopener\">Pi<\/a> for tasks involving social and emotional dimensions, along with open models like <a href=\"https:\/\/www.llama.com\/\" target=\"_blank\" rel=\"noopener\">Llama 3<\/a> (Meta) and <a href=\"https:\/\/chat.deepseek.com\/sign_in\" target=\"_blank\" rel=\"noopener\">DeepSeek-R1<\/a> (DeepSeek AI), have made advanced AI capabilities widely available at little to no cost.<sup><a id=\"nt2\" href=\"#rf2\">2<\/a><\/sup><\/p><\/blockquote>\n<p>In this shifting technological environment, where human and artificial intelligence intersect (<a href=\"https:\/\/www.ibm.com\/thought-leadership\/institute-business-value\/en-us\/report\/generative-ai-experience-design\" target=\"_blank\" rel=\"noopener\">IBM IBV, 2024<\/a>), the potential autonomy of Gen AI models and systems necessitates critical anticipation of their implications, including within the scientific context (<a href=\"https:\/\/erc.europa.eu\/sites\/default\/files\/2023-12\/AI_in_science.pdf\" target=\"_blank\" rel=\"noopener\">European Research Council, ERC, 2023<\/a>).<\/p>\n<p>Eric Schmidt, former CEO and executive chairman of Google, along with Henry Kissinger and Craig Mundie, advocates for proactive and strategic ethical regulation of AI applications in their book <em>Genesis: Artificial Intelligence, Hope, and the Human Spirit<\/em><sup><a id=\"nt3\" href=\"#rf3\">3<\/a><\/sup> (Kissinger, H., <em>et al<\/em>., 2024), with unprecedented implications for national governance concerning, for example, the autonomy of Gen AI systems.
As highlighted by Kissinger, <em>et al.<\/em> (2024),<\/p>\n<blockquote><p>AI\u2019s future faculties, operating at inhuman speeds, will render traditional regulation useless&#8230; we have very little independent ability to verify AI models\u2019 internal workings, let alone their intentions. We will need a fundamentally new form of control.<sup><a id=\"nt3\" href=\"#rf3\">3<\/a><\/sup><\/p><\/blockquote>\n<p>Schmidt, in <a href=\"https:\/\/nationalcioreview.com\/video\/eric-schmidt-on-ais-future-infinite-context-autonomous-agents-and-global-regulation\/\" target=\"_blank\" rel=\"noopener\"><em>Eric Schmidt on AI\u2019s Future: Infinite Context, Autonomous Agents, and Global Regulation<\/em><\/a><sup><a id=\"nt4\" href=\"#rf4\">4<\/a><\/sup> (2024), envisions a future where millions of AI agents will be capable of learning, evolving, and collaborating among themselves, comparing this scenario to a &#8220;GitHub for AI.&#8221; In his view, these agents would operate as autonomous systems, driving what he refers to as exponential innovations across various domains.<\/p>\n<p>However, Schmidt cautions that as these AI agents communicate independently, &#8220;they may develop their own protocols or languages&#8221;, creating risks that are difficult for humans to manage and understand: &#8220;At some point, they\u2019ll develop their own language. When that happens, we may not understand what they\u2019re doing, and that\u2019s when you pull the plug&#8221;.<\/p>\n<p>Complementing this vision, McAfee<sup><a id=\"nt1\" href=\"#rf1\">1<\/a><\/sup> (2024) emphasizes that much of the infrastructure needed for Gen AI to function is already widely available, accelerating its impact compared to previous technologies.
With Gen AI, access is not restricted to developers but extends to direct users, speeding up the transformation at an unprecedented pace.<\/p>\n<h3>Mixed feelings about the impact of Generative AI on the research activity<\/h3>\n<p>In the academic realm, uses of Gen AI and perceptions of its impact on scientific activity still vary widely among researchers. The <a href=\"https:\/\/www.wiley.com\/content\/dam\/wiley-dotcom\/en\/b2c\/content-fragments\/explanaitions-ai-report\/pdfs\/Wiley_ExplanAItions_AI_Study_February_2025.pdf\" target=\"_blank\" rel=\"noopener\"><em>ExplanAItions<\/em><\/a><sup><a id=\"nt5\" href=\"#rf5\">5<\/a><\/sup> survey by Wiley, described about three months ago in <a href=\"https:\/\/www.nature.com\/articles\/d41586-025-00343-5\" target=\"_blank\" rel=\"noopener\"><em>How are Researchers Using AI? Survey Reveals Pros and Cons for Science<\/em><\/a>,<sup><a id=\"nt6\" href=\"#rf6\">6<\/a><\/sup> published in <em>Nature<\/em> (Naddaf, M., 2025), illustrates this observation. It involved 4,946 researchers from different fields and over 70 countries and showed that the use of Gen AI as part of the research activity is still limited.<\/p>\n<p>The survey found that only 45% of respondents (1,043 researchers) used Gen AI to assist in their research, primarily focusing on translation, drafting, and manuscript review. Additionally, 81% of these researchers had used OpenAI\u2019s ChatGPT for personal or professional purposes, but only about a third were familiar with other Gen AI tools, such as Google&#8217;s <a href=\"https:\/\/gemini.google.com\/\" target=\"_blank\" rel=\"noopener\">Gemini<\/a> and Microsoft&#8217;s <a href=\"https:\/\/copilot.microsoft.com\/\" target=\"_blank\" rel=\"noopener\">Copilot<\/a>.
There were variations across countries and disciplines, with computer scientists being, unsurprisingly, more likely to use Gen AI in these activities.<\/p>\n<p>Based on this survey, Naddaf<sup><a id=\"nt6\" href=\"#rf6\">6<\/a><\/sup> (2025) reports that researchers remain skeptical about the capabilities of Gen AI for more complex tasks in the research process, such as identifying literature gaps or recommending reviewers. Most participants believe that these and other tasks in science are better performed by humans.<\/p>\n<p>&#8220;Although 64% of respondents are open to using AI for these tasks in the next two years, the majority thinks that humans still outperform AI in these areas.&#8221;<sup><a id=\"nt6\" href=\"#rf6\">6<\/a><\/sup> Besides this issue, &#8220;[r]esearchers are also concerned about the safety of using these tools: 81% of respondents said they had concerns about AI\u2019s accuracy, potential biases, privacy risks and the lack of transparency in how these tools are trained.&#8221;<sup><a id=\"nt6\" href=\"#rf6\">6<\/a><\/sup><\/p>\n<p>The survey <a href=\"https:\/\/erc.europa.eu\/sites\/default\/files\/2023-12\/AI_in_science.pdf\" target=\"_blank\" rel=\"noopener\"><em>Foresight: Use and Impact of Artificial Intelligence in the Scientific Process<\/em><\/a><sup><a id=\"nt7\" href=\"#rf7\">7<\/a><\/sup> on AI usage in scientific practice conducted by the European Research Council (2023), based on a population of researchers associated with 1,046 projects registered with the agency, identified uses of Gen AI for drafting and editing, translating texts, coding, programming, and generating images, among other uses.<\/p>\n<p>The ERC report (2023) describes that in the life sciences, whose participants represent 18% of the total survey respondents, researchers use &#8220;AI methods, for instance, to understand individual differences in large cohorts, and to make predictions about diagnosis or outcome of targeted therapies&#8221;<sup><a id=\"nt7\" href=\"#rf7\">7<\/a><\/sup>
and that &#8220;[AI tools are] seen as an essential support to analyze datasets of genomic, epigenomic and transcriptomic data\u2026&#8221;<sup><a id=\"nt7\" href=\"#rf7\">7<\/a><\/sup> in addition to comparing different stages of a given disease, for instance.<\/p>\n<p>For participants in the social sciences and humanities, who account for 29% of total respondents,<\/p>\n<blockquote><p>neural networks and natural language processing (NLP) tools are used for a wide range of applications, e.g. models for handwritten text recognition and automatic speech recognition, or the automatic classification of musical compositions&#8230; AI is used to identify vocal biomarkers of stress in voice samples, to detect extreme speech in online discussions&#8230; for model-based data analysis for decoding and comparing mental representations in the brain or predicting\/simulating human learning&#8230;<sup><a id=\"nt7\" href=\"#rf7\">7<\/a><\/sup><\/p><\/blockquote>\n<p>As in the Wiley survey reported by Naddaf<sup><a id=\"nt6\" href=\"#rf6\">6<\/a><\/sup> (2025), the ERC (2023) survey reveals that &#8220;[e]xpectations regarding the use of AI for scientific discovery, however, varied among respondents.&#8221;<sup><a id=\"nt7\" href=\"#rf7\">7<\/a><\/sup><\/p>\n<p>In the article <a href=\"https:\/\/doi.org\/10.1038\/s44319-025-00424-6\" target=\"_blank\" rel=\"noopener\"><em>Gen AI and Research Integrity: Where to Now?: The Integration of Generative AI in the Research Process Challenges Well-Established Definitions of Research Integrity<\/em><\/a>,<sup><a id=\"nt8\" href=\"#rf8\">8<\/a><\/sup> recently published in <em>EMBO Reports<\/em> (Vasconcelos, S. and Maru\u0161i\u0107, A., 2025), we explored other aspects of the research activity involving Gen AI. We emphasize the importance of developing strategies that prioritize human agency and oversight in this collaboration between humans and Gen AI in research.
An intriguing question in this context is the extent of influence to be allowed in this interaction between researchers and Gen AI models in the research process.<\/p>\n<h3>Integration of Generative AI in the research activity\u2014questions about human agency and alignment<\/h3>\n<p>It is crucial for the research ecosystem, including authors, funders, and research institutions, to engage in discussions and initiatives that address alignment challenges (<a href=\"https:\/\/people.eecs.berkeley.edu\/~russell\/papers\/mi19book-hcai.pdf\" target=\"_blank\" rel=\"noopener\">Russell, S., 2019<\/a>)<sup><a id=\"nt9\" href=\"#rf9\">9<\/a><\/sup> and the deployment of these models. We acknowledge that transparency in using Gen AI for scientific communication is a necessary practice that has been gaining ground within the publication system.<\/p>\n<p>However, we also understand that a comprehensive view of the problem, which impacts the definitions of research integrity established globally, is fundamental. Inspired by Donald Stokes&#8217;s diagram on scientific research (1997), which presents the famous Pasteur&#8217;s Quadrant, we proposed a model to represent research integrity and its relationship with human agency, both individual and collective, under different conditions. Each quadrant of the model offers a distinct perspective on this relationship and its potential implications for research governance.<\/p>\n<p>This framework allows us to assess integrity from a viewpoint that reflects the level of agency and autonomy that researchers possess and that they might opt to delegate when engaging in research that involves collaboration with Gen AI.
The model can be accessed in Vasconcelos and Maru\u0161i\u0107<sup><a id=\"nt8\" href=\"#rf8\">8<\/a><\/sup> (2025).<\/p>\n<p>As discussed,<\/p>\n<blockquote><p>Gen AI is already affecting every stage of the research process from formulating a hypothesis to designing experiments, data analysis, visualization and interpretation of results, writing a research paper and even peer review (<a href=\"https:\/\/doi.org\/10.1056\/AIoa2400555\" target=\"_blank\" rel=\"noopener\">Ifargan, T., <em>et al.<\/em><\/a>, 2024; <a href=\"https:\/\/doi.org\/10.1073\/pnas.2401227121\" target=\"_blank\" rel=\"noopener\">Binz, M., <em>et al.<\/em><\/a>, 2025; Naddaf, M., 2025). As it challenges both the notion of individual responsibility as well as community-standards of good research practices, integrating Gen AI into the research endeavor, while maintaining trustworthiness, has become an urgent demand in academia.<sup><a id=\"nt8\" href=\"#rf8\">8<\/a><\/sup><\/p><\/blockquote>\n<p>Citing <a href=\"https:\/\/doi.org\/10.1007\/979-8-8688-0917-0_1\" target=\"_blank\" rel=\"noopener\">Dua and Patel (2024)<\/a>, who detail the potentialities of Gen AI, we emphasize a growing consensus in the publication system on the urgent need to revisit ethical standards and verification processes in research permeated by these technologies. In the realm of experimental research, in <a href=\"https:\/\/doi.org\/10.1016\/j.cell.2024.09.022\" target=\"_blank\" rel=\"noopener\"><em>Empowering Biomedical Discovery with AI Agents<\/em><\/a>,<sup><a id=\"nt10\" href=\"#rf10\">10<\/a><\/sup> Gao, <em>et al<\/em>. 
(2024) envision scientific AI agents &#8220;as systems capable of skeptical learning and reasoning that empower biomedical research through collaborative agents that integrate AI models and biomedical tools with experimental platforms.&#8221;<\/p>\n<p>Gao, <em>et al<\/em>.<sup><a id=\"nt10\" href=\"#rf10\">10<\/a><\/sup> (2024) note that intersections among technological, scientific, ethical, and regulatory domains play an important role in developing effective governance structures. As mentioned earlier, as these AI agents reach higher levels of autonomy, such concerns become even more relevant.<\/p>\n<p>As pointed out by Kissinger, <em>et al.<\/em><sup><a id=\"nt3\" href=\"#rf3\">3<\/a><\/sup> (2024) and also reflected in the analyses of Gao, <em>et al.<\/em><sup><a id=\"nt10\" href=\"#rf10\">10<\/a><\/sup> (2024), dealing with this emerging autonomy represents a challenge that requires a broad and robust debate within the scientific community, especially regarding the alignment of these systems with the goals of preserving research integrity and public trust in science. Questions about alignment and human agency are interdependent but far from trivial.<\/p>\n<p>In the <em>EMBO Reports<\/em> article,<sup><a id=\"nt8\" href=\"#rf8\">8<\/a><\/sup> we cite the 2023 report <a href=\"https:\/\/www.pewresearch.org\/wp-content\/uploads\/sites\/20\/2023\/02\/PI_2023.02.24_The-Future-of-Human-Agency_FINAL.pdf\" target=\"_blank\" rel=\"noopener\">The Future of Human Agency<\/a>.<sup><a id=\"nt11\" href=\"#rf11\">11<\/a><\/sup> In this document, Pew Research Center presents an exploration of &#8220;how much control people will retain over essential decision-making as digital systems and AI spread.&#8221; As described in the article,<sup><a id=\"nt8\" href=\"#rf8\">8<\/a><\/sup> Pew and Elon University\u2019s Imagining the Internet Center invited various stakeholders, including David J.
Krieger, director of the Institute for Communication and Leadership in Lucerne, Switzerland. In Krieger&#8217;s view,<\/p>\n<blockquote><p>[i]ndividual agency is already a myth, and this will become increasingly obvious with time&#8230; Humanism attempts to preserve the myth of individual agency and enshrine it in law. Good design of socio-technical networks will need to be explicit about its post-humanist presuppositions in order to bring the issue into public debate. Humans will act in partnership\u2014that is, distributed agency\u2014with technologies of all kinds.<sup><a id=\"nt8\" href=\"#rf8\">8<\/a><\/sup><\/p><\/blockquote>\n<p>As detailed in the article, &#8220;Human agency and oversight are among the operational key requirements supporting ethical principles for AI systems established by the European Commission, as part of its <a href=\"https:\/\/research-and-innovation.ec.europa.eu\/document\/download\/2b6cf7e5-36ac-41cb-aab5-0d32050143dc_en\" target=\"_blank\" rel=\"noopener\">Living Guidelines on the Responsible Use of Generative AI in Research<\/a><sup><a id=\"nt12\" href=\"#rf12\">12<\/a><\/sup> (2024).&#8221;<sup><a id=\"nt8\" href=\"#rf8\">8<\/a><\/sup><\/p>\n<p>We also addressed the <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/library\/ethics-guidelines-trustworthy-ai\" target=\"_blank\" rel=\"noopener\"><em>Ethics Guidelines for Trustworthy AI<\/em><\/a><sup><a id=\"nt13\" href=\"#rf13\">13<\/a><\/sup> (2019). These guidelines present<\/p>\n<blockquote><p>three levels of human agency and oversight: human-in-the-loop (HITL), human-on-the-loop (HOTL), and human-in-command (HIC) approaches. 
HITL entails human intervention for the whole decision cycle; HOTL involves human input during the design cycle and continuous monitoring of the system&#8217;s operations; HIC encompasses human oversight of the entire activity of the AI system, including societal and ethical impacts, as well as decision-making for when and how to use the system in various contexts.<sup><a id=\"nt8\" href=\"#rf8\">8<\/a><\/sup><\/p><\/blockquote>\n<h3>Human agency and oversight are essential in the research activity, but not sufficient<\/h3>\n<p>In the <em>EMBO Reports<\/em> article,<sup><a id=\"nt8\" href=\"#rf8\">8<\/a><\/sup> alignment in human-Gen AI collaboration is explored as a sensitive issue, reflecting concerns with the culture of scientific communication over the last decades. In <a href=\"https:\/\/doi.org\/10.1038\/s41598-024-70031-3\" target=\"_blank\" rel=\"noopener\">Strong and Weak Alignment of Large Language Models with Human Values<\/a>,<sup><a id=\"nt14\" href=\"#rf14\">14<\/a><\/sup> Khamassi, <em>et al<\/em>. (2024) highlight timely issues about &#8220;strong and weak value alignment&#8221; of Large Language Models (LLMs) with human values and the complexities of AI&#8217;s understanding of these values.<\/p>\n<p>In <a href=\"https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2022\/hash\/b1efde53be364a73914f58805a001731-Abstract-Conference.html\" target=\"_blank\" rel=\"noopener\">Training Language Models to Follow Instructions with Human Feedback<\/a>,<sup><a id=\"nt15\" href=\"#rf15\">15<\/a><\/sup> Ouyang, <em>et al<\/em>. 
(2022) say that &#8220;apart from its caveats, alignment has a key role in fine-tuning LLMs to respond adequately to human intentions while mitigating harmful, toxic, or biased content.&#8221;<sup><a id=\"nt15\" href=\"#rf15\">15<\/a><\/sup> For a broader view of Gen AI model alignment, we cite OpenAI, which introduced &#8220;deliberative alignment&#8221;: a training paradigm to ensure reasoning LLMs behave in a manner consistent with predefined safety criteria before generating responses to commands (<a href=\"https:\/\/openai.com\/index\/deliberative-alignment\/\" target=\"_blank\" rel=\"noopener\">Guan, M.Y., et al., 2024<\/a>).<\/p>\n<p>We argue that, even from the perspective of LLMs as &#8220;stochastic parrots&#8221;, a debated interpretation adopted by <a href=\"https:\/\/doi.org\/10.1145\/3442188.3445922\" target=\"_blank\" rel=\"noopener\">Emily Bender, <em>et al<\/em>. (2021)<\/a> &#8220;to emphasize that these systems remix patterns in their training data without true understanding&#8221;, alignment remains a critical issue.<sup><a id=\"nt16\" href=\"#rf16\">16<\/a><\/sup> It is crucial for the academic community to carefully consider the role of alignment in Gen AI systems.<\/p>\n<blockquote><p>Given that training data is mostly derived from human-created data, it inherently reflects cultural patterns, worldviews, and social biases, along with the strengths and flaws of human knowledge production. As a result, Gen AI models risk reproducing\u2014or even amplifying\u2014biases in their outputs. When it comes to interacting with Gen AI models to formulate hypotheses, analyze data, or write a research report, how alignment shapes the output and behavior is a critical issue.<sup><a id=\"nt8\" href=\"#rf8\">8<\/a><\/sup> (Vasconcelos, S.
and Maru\u0161i\u0107, A., 2025)<\/p><\/blockquote>\n<p>In the article, we note that many LLMs are continually trained and updated with new data in a scientific environment where the publication culture tends to favor positive or biased reports (<a href=\"https:\/\/doi.org\/10.1016\/j.jclinepi.2019.07.011\" target=\"_blank\" rel=\"noopener\">Ghannad, M., <em>et al<\/em>., 2019<\/a>; <a href=\"https:\/\/doi.org\/10.1073\/pnas.1710755115\" target=\"_blank\" rel=\"noopener\">Boutron, I. and Ravaud, P., 2018<\/a>). This tradition does not incentivize the open exposure of errors, which limits scientific transparency and reinforces biased communication norms (<a href=\"https:\/\/doi.org\/10.1038\/489007a\" target=\"_blank\" rel=\"noopener\"><em>Nature<\/em>, 2012<\/a>; <a href=\"https:\/\/www.psychologicalscience.org\/observer\/right-side-of-wrong\" target=\"_blank\" rel=\"noopener\"><em>Psychological<\/em> <em>Science Observer<\/em>, 2021<\/a>; <a href=\"https:\/\/doi.org\/10.1038\/s41467-024-54614-2\" target=\"_blank\" rel=\"noopener\"><em>Nature Communications<\/em>, 2024<\/a>).<\/p>\n<p>These elements comprise long-term structural challenges for the alignment and responsible deployment of Gen AI models, which researchers already interact with\u2014and will continue to\u2014in their research practices.<\/p>\n<h3>Notions of research integrity naturally incorporate human agency, but now it needs to be explicit<\/h3>\n<p>Research integrity is closely tied to adherence to ethical, transparent, and rigorous scientific practices, relying on individual and collective responsibility, as well described by Luiz Henrique Lopes dos Santos in <a href=\"http:\/\/doi.org\/10.21800\/2317-66602017000300002\" target=\"_blank\" rel=\"noopener\"><em>Sobre a integridade \u00e9tica da pesquisa<\/em><\/a><sup><a id=\"nt17\" href=\"#rf17\">17<\/a><\/sup> (2017).<\/p>\n<p>However, as we highlight in the article, the notion of research integrity cultivated over time assumes human agency as a 
given. Now, with the ongoing integration of Gen AI into all stages of the research process, it becomes crucial to place greater emphasis on &#8220;human agency&#8221; within the definitions of research integrity. We believe this emphasis is vital for ensuring informed decision-making in the emerging research landscape.<\/p>\n<p>Our proposal is that definitions of research integrity should explicitly incorporate &#8220;human agency&#8221; as an essential component for the proposal, conduct, communication, and review of research. This expanded approach should also include the development of benchmarks for research integrity in the training and deployment of Gen AI models and systems.<\/p>\n<p>In the biomedical sciences, these concerns are particularly pertinent to the governance of AI agents, but, as has been pointed out, their implications are far-reaching. It is crucial to address these sensitive issues in decision-making domains within the research enterprise, especially in the context of human collaboration with Gen AI systems. This necessity is justified by the increasing capabilities and potential of Gen AI to exert significant influence over research processes across all fields of knowledge.<\/p>\n<p>As noted in the <em>EMBO Reports<\/em> article,<sup><a id=\"nt8\" href=\"#rf8\">8<\/a><\/sup> academia should and can take a more proactive stance in seeking an understanding of the close relationship between research integrity and human agency in times of profound transformation in knowledge production. 
We argue that this should not be a long-term goal, as Gen AI has the potential to redefine standards and influence the reliability and culture of scientific communication among peers and to different audiences.<\/p>\n<p>By explicitly integrating human agency into definitions of research integrity, researchers and policymakers acknowledge that the autonomy of Gen AI systems must be balanced with human agency and oversight, cultivating responsible use and ethical regulatory policies for research involving human and Gen AI collaboration.<\/p>\n<h3>Notes<\/h3>\n<p>1. McAFEE, A. Generally Faster: The Economic Impact of Generative AI [online]. The MIT Initiative on the Digital Economy (IDE). 2024 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/ide.mit.edu\/wp-content\/uploads\/2024\/04\/Davos-Report-Draft-XFN-Copy-01112024-Print-Version.pdf?x76181\">https:\/\/ide.mit.edu\/wp-content\/uploads\/2024\/04\/Davos-Report-Draft-XFN-Copy-01112024-Print-Version.pdf?x76181<\/a> <a id=\"rf1\" href=\"#nt1\">\u21a9<\/a><\/p>\n<p>2. KRAKOWSKI, S. Human-AI Agency in the Age of Generative AI. <em>Information and Organization<\/em> [online]. 2025, vol. 35, no. 1, 100560, ISSN: 1471-7727 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1016\/j.infoandorg.2025.100560\">https:\/\/doi.org\/10.1016\/j.infoandorg.2025.100560<\/a>. Available from: <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S1471772725000065?via%3Dihub\">https:\/\/www.sciencedirect.com\/science\/article\/pii\/S1471772725000065?via%3Dihub<\/a> <a id=\"rf2\" href=\"#nt2\">\u21a9<\/a><\/p>\n<p>3. KISSINGER, H.A., SCHMIDT, E. and MUNDIE, C. Genesis: Artificial Intelligence, Hope, and the Human Spirit. New York: Little Brown and Company, 2024 <a id=\"rf3\" href=\"#nt3\">\u21a9<\/a><\/p>\n<p>4. Eric Schmidt on AI\u2019s Future: Infinite Context, Autonomous Agents, and Global Regulation [online]. The National CIO Review. 2024 [viewed 7 May 2025]. 
Available from: <a href=\"https:\/\/nationalcioreview.com\/video\/eric-schmidt-on-ais-future-infinite-context-autonomous-agents-and-global-regulation\/\">https:\/\/nationalcioreview.com\/video\/eric-schmidt-on-ais-future-infinite-context-autonomous-agents-and-global-regulation\/<\/a> <a id=\"rf4\" href=\"#nt4\">\u21a9<\/a><\/p>\n<p>5. ExplanAItions: An AI Study by Wiley [online]. Wiley. 2025 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/www.wiley.com\/content\/dam\/wiley-dotcom\/en\/b2c\/content-fragments\/explanaitions-ai-report\/pdfs\/Wiley_ExplanAItions_AI_Study_February_2025.pdf\">https:\/\/www.wiley.com\/content\/dam\/wiley-dotcom\/en\/b2c\/content-fragments\/explanaitions-ai-report\/pdfs\/Wiley_ExplanAItions_AI_Study_February_2025.pdf<\/a> <a id=\"rf5\" href=\"#nt5\">\u21a9<\/a><\/p>\n<p>6. NADDAF, M. How are Researchers Using AI? Survey Reveals Pros and Cons for Science [online]. Nature. 2025 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1038\/d41586-025-00343-5\">https:\/\/doi.org\/10.1038\/d41586-025-00343-5<\/a>. Available from: <a href=\"https:\/\/www.nature.com\/articles\/d41586-025-00343-5\">https:\/\/www.nature.com\/articles\/d41586-025-00343-5<\/a> <a id=\"rf6\" href=\"#nt6\">\u21a9<\/a><\/p>\n<p>7. EUROPEAN RESEARCH COUNCIL. Foresight: Use and Impact of Artificial Intelligence in the Scientific Process [online]. European Research Council. 2023 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/erc.europa.eu\/sites\/default\/files\/2023-12\/AI_in_science.pdf\">https:\/\/erc.europa.eu\/sites\/default\/files\/2023-12\/AI_in_science.pdf<\/a> <a id=\"rf7\" href=\"#nt7\">\u21a9<\/a><\/p>\n<p>8. VASCONCELOS, S. and MARU\u0160I\u0106, A. Gen AI and Research Integrity: Where to now?: The Integration of Generative AI in the Research Process Challenges Well-Established Definitions of Research Integrity. <em>EMBO Reports<\/em> [online]. 2025, vol. 26, pp. 1923\u20131928 [viewed 7 May 2025].
<a href=\"https:\/\/doi.org\/10.1038\/s44319-025-00424-6\">https:\/\/doi.org\/10.1038\/s44319-025-00424-6<\/a>. Available from: <a href=\"https:\/\/www.embopress.org\/doi\/full\/10.1038\/s44319-025-00424-6\">https:\/\/www.embopress.org\/doi\/full\/10.1038\/s44319-025-00424-6<\/a> <a id=\"rf8\" href=\"#nt8\">\u21a9<\/a><\/p>\n<p>9. In the book <a href=\"https:\/\/people.eecs.berkeley.edu\/~russell\/papers\/mi19book-hcai.pdf\" target=\"_blank\" rel=\"noopener\">Human compatible: artificial intelligence and the problem of control<\/a> by Stuart Russell (2019), the following is described: &#8220;Putting a purpose into a machine that optimizes its behavior according to clearly defined algorithms seems an admirable approach to ensuring that the machine\u2019s behavior furthers our own objectives. But, as Wiener warns, we need to put in the right purpose. We might call this the King Midas problem: Midas got exactly what he asked for\u2014namely, that everything he touched would turn to gold\u2014but, too late, he discovered the drawbacks of drinking liquid gold and eating solid gold. The technical term for putting in the right purpose is value alignment. When it fails, we may inadvertently imbue machines with objectives counter to our own.&#8221; <a id=\"rf9\" href=\"#nt9\">\u21a9<\/a><\/p>\n<p>10. GAO, S., <em>et al<\/em>. Empowering Biomedical Discovery with AI Agents. <em>Cell<\/em> [online]. 2024, vol. 187, no. 22, pp. 6125\u20136151, ISSN: 0092-8674 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1016\/j.cell.2024.09.022\">https:\/\/doi.org\/10.1016\/j.cell.2024.09.022<\/a>. Available from: <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0092867424010705\">https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0092867424010705<\/a> <a id=\"rf10\" href=\"#nt10\">\u21a9<\/a><\/p>\n<p>11. PEW RESEARCH CENTER. The Future of Human Agency [online]. Pew Research Center. 2023 [viewed 7 May 2025]. 
Available from: <a href=\"https:\/\/www.pewresearch.org\/wp-content\/uploads\/sites\/20\/2023\/02\/PI_2023.02.24_The-Future-of-Human-Agency_FINAL.pdf\">https:\/\/www.pewresearch.org\/wp-content\/uploads\/sites\/20\/2023\/02\/PI_2023.02.24_The-Future-of-Human-Agency_FINAL.pdf<\/a> <a id=\"rf11\" href=\"#nt11\">\u21a9<\/a><\/p>\n<p>12. EUROPEAN COMMISSION. Living Guidelines on the Responsible Use of Generative AI in Research [online]. European Commission\u2014Research and innovation. 2025 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/research-and-innovation.ec.europa.eu\/document\/download\/2b6cf7e5-36ac-41cb-aab5-0d32050143dc_en\">https:\/\/research-and-innovation.ec.europa.eu\/document\/download\/2b6cf7e5-36ac-41cb-aab5-0d32050143dc_en<\/a> <a id=\"rf12\" href=\"#nt12\">\u21a9<\/a><\/p>\n<p>13. Ethics Guidelines for Trustworthy AI [online]. European Commission, official website. 2019 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/library\/ethics-guidelines-trustworthy-ai\">https:\/\/digital-strategy.ec.europa.eu\/en\/library\/ethics-guidelines-trustworthy-ai<\/a> <a id=\"rf13\" href=\"#nt13\">\u21a9<\/a><\/p>\n<p>14. KHAMASSI, M., NAHON, M. and CHATILA, R. Strong and Weak Alignment of Large Language Models with Human Values. <em>Sci Rep<\/em> [online]. 2024, vol. 14, 19399 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1038\/s41598-024-70031-3\">https:\/\/doi.org\/10.1038\/s41598-024-70031-3<\/a>. Available from: <a href=\"https:\/\/www.nature.com\/articles\/s41598-024-70031-3\">https:\/\/www.nature.com\/articles\/s41598-024-70031-3<\/a> <a id=\"rf14\" href=\"#nt14\">\u21a9<\/a><\/p>\n<p>15. OUYANG, L., <em>et al<\/em>. Training Language Models to Follow Instructions with Human Feedback. In: Advances in Neural Information Processing Systems 35 (NeurIPS 2022), New Orleans, 2022 [viewed 7 May 2025]. 
Available from: <a href=\"https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2022\/hash\/b1efde53be364a73914f58805a001731-Abstract-Conference.html\">https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2022\/hash\/b1efde53be364a73914f58805a001731-Abstract-Conference.html<\/a> <a id=\"rf15\" href=\"#nt15\">\u21a9<\/a><\/p>\n<p>16. BENDER, E., <em>et al<\/em>. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In: FAccT &#8217;21: 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, 2021 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1145\/3442188.3445922\">https:\/\/doi.org\/10.1145\/3442188.3445922<\/a>. Available from: <a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3442188.3445922\">https:\/\/dl.acm.org\/doi\/10.1145\/3442188.3445922<\/a> <a id=\"rf16\" href=\"#nt16\">\u21a9<\/a><\/p>\n<p>17. SANTOS, L.H.L. Sobre a integridade \u00e9tica da pesquisa. <em>Cienc. Cult.<\/em> [online]. 2017, vol.69, no.3, pp. 4\u20135, ISSN: 2317-6660 [viewed 7 May 2025]. <a href=\"http:\/\/doi.org\/10.21800\/2317-66602017000300002\">http:\/\/doi.org\/10.21800\/2317-66602017000300002<\/a>. Available from: <a href=\"http:\/\/cienciaecultura.bvs.br\/scielo.php?script=sci_arttext&amp;pid=S0009-67252017000300002\">http:\/\/cienciaecultura.bvs.br\/scielo.php?script=sci_arttext&amp;pid=S0009-67252017000300002<\/a> <a id=\"rf17\" href=\"#nt17\">\u21a9<\/a><\/p>\n<h3>References<\/h3>\n<p>BANDURA, A. Human Agency in Social Cognitive Theory. <em>American Psychologist<\/em> [online]. 1989, vol. 44, no. 9, pp. 1175\u20131184 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1037\/0003-066X.44.9.1175\">https:\/\/doi.org\/10.1037\/0003-066X.44.9.1175<\/a>. Available from: <a href=\"https:\/\/psycnet.apa.org\/doiLanding?doi=10.1037%2F0003-066X.44.9.1175\">https:\/\/psycnet.apa.org\/doiLanding?doi=10.1037%2F0003-066X.44.9.1175<\/a><\/p>\n<p>BENDER, E., <em>et al<\/em>. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 
In: FAccT &#8217;21: 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, 2021 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1145\/3442188.3445922\">https:\/\/doi.org\/10.1145\/3442188.3445922<\/a>. Available from: <a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3442188.3445922\">https:\/\/dl.acm.org\/doi\/10.1145\/3442188.3445922<\/a><\/p>\n<p>BINZ, M., <em>et al<\/em>. How Should the Advancement of Large Language Models Affect the Practice of Science? <em>Proc. Natl. Acad. Sci. U.S.A.<\/em> [online]. 2025, vol. 122, no. 5, e2401227121 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1073\/pnas.2401227121\">https:\/\/doi.org\/10.1073\/pnas.2401227121<\/a>. Available from: <a href=\"https:\/\/www.pnas.org\/doi\/10.1073\/pnas.2401227121\">https:\/\/www.pnas.org\/doi\/10.1073\/pnas.2401227121<\/a><\/p>\n<p>BOUTRON, I. and RAVAUD, P. Misrepresentation and Distortion of Research in Biomedical Literature. <em>Proc. Natl. Acad. Sci. U.S.A.<\/em> [online]. 2018, vol. 115, no. 11, pp. 2613\u20132619 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1073\/pnas.1710755115\">https:\/\/doi.org\/10.1073\/pnas.1710755115<\/a>. Available from: <a href=\"https:\/\/www.pnas.org\/doi\/full\/10.1073\/pnas.1710755115\">https:\/\/www.pnas.org\/doi\/full\/10.1073\/pnas.1710755115<\/a><\/p>\n<p>DUA, I.K. and PATEL, P.G. <em>An Introduction to Generative AI<\/em>. In: DUA, I.K. and PATEL, P.G. (authors) Optimizing Generative AI Workloads for Sustainability: Balancing Performance and Environmental Impact in Generative AI. New York: Apress, 2024.<\/p>\n<p>Eric Schmidt on AI&#8217;s Future: Infinite Context, Autonomous Agents, and Global Regulation [online]. The National CIO Review. 2024 [viewed 7 May 2025]. 
Available from: <a href=\"https:\/\/nationalcioreview.com\/video\/eric-schmidt-on-ais-future-infinite-context-autonomous-agents-and-global-regulation\/\">https:\/\/nationalcioreview.com\/video\/eric-schmidt-on-ais-future-infinite-context-autonomous-agents-and-global-regulation\/<\/a><\/p>\n<p>Ethics Guidelines for Trustworthy AI [online]. European Commission, official website. 2019 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/library\/ethics-guidelines-trustworthy-ai\">https:\/\/digital-strategy.ec.europa.eu\/en\/library\/ethics-guidelines-trustworthy-ai<\/a><\/p>\n<p>EUROPEAN COMMISSION. Living Guidelines on the Responsible Use of Generative AI in Research [online]. European Commission\u2014Research and innovation. 2025 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/research-and-innovation.ec.europa.eu\/document\/download\/2b6cf7e5-36ac-41cb-aab5-0d32050143dc_en\">https:\/\/research-and-innovation.ec.europa.eu\/document\/download\/2b6cf7e5-36ac-41cb-aab5-0d32050143dc_en<\/a><\/p>\n<p>EUROPEAN RESEARCH COUNCIL. Foresight: Use and Impact of Artificial Intelligence in the Scientific Process [online]. European Research Council. 2023 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/erc.europa.eu\/sites\/default\/files\/2023-12\/AI_in_science.pdf\">https:\/\/erc.europa.eu\/sites\/default\/files\/2023-12\/AI_in_science.pdf<\/a><\/p>\n<p>ExplanAItions: An AI Study by Wiley [online]. Wiley. 2025 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/www.wiley.com\/content\/dam\/wiley-dotcom\/en\/b2c\/content-fragments\/explanaitions-ai-report\/pdfs\/Wiley_ExplanAItions_AI_Study_February_2025.pdf\">https:\/\/www.wiley.com\/content\/dam\/wiley-dotcom\/en\/b2c\/content-fragments\/explanaitions-ai-report\/pdfs\/Wiley_ExplanAItions_AI_Study_February_2025.pdf<\/a><\/p>\n<p>GAO, S., <em>et al<\/em>. Empowering Biomedical Discovery with AI Agents. <em>Cell<\/em> [online]. 2024, vol. 187, no. 22, pp. 
6125\u20136151, ISSN: 0092-8674 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1016\/j.cell.2024.09.022\">https:\/\/doi.org\/10.1016\/j.cell.2024.09.022<\/a>. Available from: <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0092867424010705\">https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0092867424010705<\/a><\/p>\n<p>GHANNAD, M., <em>et al<\/em>. A systematic review finds that spin or interpretation bias is abundant in evaluations of ovarian cancer biomarkers. <em>Journal of Clinical Epidemiology<\/em> [online]. 2019, vol.116, pp. 9\u201317, ISSN: 0895-4356 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1016\/j.jclinepi.2019.07.011\">https:\/\/doi.org\/10.1016\/j.jclinepi.2019.07.011<\/a>. Available from: <a href=\"https:\/\/www.jclinepi.com\/article\/S0895-4356(18)30952-1\/fulltext\">https:\/\/www.jclinepi.com\/article\/S0895-4356(18)30952-1\/fulltext<\/a><\/p>\n<p>GUAN, M.Y., <em>et al<\/em>. Deliberative Alignment: Reasoning Enables Safer Language Models [online]. OpenAI. 2024 [viewed 7 May 2025]. <a href=\"https:\/\/openai.com\/index\/deliberative-alignment\/\">https:\/\/openai.com\/index\/deliberative-alignment\/<\/a><\/p>\n<p>IBM IBV. Disruption by design: Evolving experiences in the age of generative AI [online]. IBM Institute for Business Value. 2024 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/www.ibm.com\/thought-leadership\/institute-business-value\/en-us\/report\/generative-ai-experience-design\">https:\/\/www.ibm.com\/thought-leadership\/institute-business-value\/en-us\/report\/generative-ai-experience-design<\/a><\/p>\n<p>IFARGAN, T., <em>et al<\/em>. Autonomous LLM-Driven Research \u2014 from Data to Human-Verifiable Research Papers. <em>NEJM AI<\/em> [online]. 2025, vol. 2, no. 1 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1056\/AIoa2400555\">https:\/\/doi.org\/10.1056\/AIoa2400555<\/a>. 
Available from: <a href=\"https:\/\/ai.nejm.org\/doi\/10.1056\/AIoa2400555\">https:\/\/ai.nejm.org\/doi\/10.1056\/AIoa2400555<\/a><\/p>\n<p>KHAMASSI, M., NAHON, M. and CHATILA, R. Strong and Weak Alignment of Large Language Models with Human Values. <em>Sci Rep<\/em> [online]. 2024, vol. 14, 19399 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1038\/s41598-024-70031-3\">https:\/\/doi.org\/10.1038\/s41598-024-70031-3<\/a>. Available from: <a href=\"https:\/\/www.nature.com\/articles\/s41598-024-70031-3\">https:\/\/www.nature.com\/articles\/s41598-024-70031-3<\/a><\/p>\n<p>KISSINGER, H.A., SCHMIDT, E. and MUNDIE, C. Genesis: Artificial Intelligence, Hope, and the Human Spirit. New York: Little Brown and Company, 2024<\/p>\n<p>KRAKOWSKI, S. Human-AI Agency in the Age of Generative AI. <em>Information and Organization<\/em> [online]. 2025, vol. 35, no. 1, 100560, ISSN: 1471-7727 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1016\/j.infoandorg.2025.100560\">https:\/\/doi.org\/10.1016\/j.infoandorg.2025.100560<\/a>. Available from: <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S1471772725000065?via%3Dihub\">https:\/\/www.sciencedirect.com\/science\/article\/pii\/S1471772725000065?via%3Dihub<\/a><\/p>\n<p>McAFEE, A. Generally Faster: The Economic Impact of Generative AI [online]. The MIT Initiative on the Digital Economy (IDE). 2024 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/ide.mit.edu\/wp-content\/uploads\/2024\/04\/Davos-Report-Draft-XFN-Copy-01112024-Print-Version.pdf?x76181\">https:\/\/ide.mit.edu\/wp-content\/uploads\/2024\/04\/Davos-Report-Draft-XFN-Copy-01112024-Print-Version.pdf?x76181<\/a><\/p>\n<p>NADDAF, M. How are Researchers Using AI? Survey Reveals Pros and Cons for Science [online]. Nature. 2025 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1038\/d41586-025-00343-5\">https:\/\/doi.org\/10.1038\/d41586-025-00343-5<\/a>. 
Available from: <a href=\"https:\/\/www.nature.com\/articles\/d41586-025-00343-5\">https:\/\/www.nature.com\/articles\/d41586-025-00343-5<\/a><\/p>\n<p>NOORDEN, R.V. and WEBB, R. ChatGPT and Science: the AI System was a Force in 2023\u2014for Good and Bad. <em>Nature<\/em> [online]. 2023, vol. 624 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1038\/d41586-023-03930-6\">https:\/\/doi.org\/10.1038\/d41586-023-03930-6<\/a>. Available from: <a href=\"https:\/\/www.nature.com\/articles\/d41586-023-03930-6\">https:\/\/www.nature.com\/articles\/d41586-023-03930-6<\/a><\/p>\n<p>OUYANG, L., <em>et al<\/em>. Training Language Models to Follow Instructions with Human Feedback. In: Advances in Neural Information Processing Systems 35 (NeurIPS 2022), New Orleans, 2022 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2022\/hash\/b1efde53be364a73914f58805a001731-Abstract-Conference.html\">https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2022\/hash\/b1efde53be364a73914f58805a001731-Abstract-Conference.html<\/a><\/p>\n<p>PEW RESEARCH CENTER. The Future of Human Agency [online]. Pew Research Center. 2023 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/www.pewresearch.org\/wp-content\/uploads\/sites\/20\/2023\/02\/PI_2023.02.24_The-Future-of-Human-Agency_FINAL.pdf\">https:\/\/www.pewresearch.org\/wp-content\/uploads\/sites\/20\/2023\/02\/PI_2023.02.24_The-Future-of-Human-Agency_FINAL.pdf<\/a><\/p>\n<p>Reproducibility and Transparency: What&#8217;s Going on and How Can We Help. <em>Nature Communications<\/em> [online]. 2025, vol. 16 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/www.nature.com\/articles\/s41467-024-54614-2\">https:\/\/www.nature.com\/articles\/s41467-024-54614-2<\/a><\/p>\n<p>RUSSELL, S. Human Compatible: Artificial Intelligence and the Problem of Control. Oxford: Oxford University Press, 2019 [viewed 7 May 2025]. 
Available from: <a href=\"https:\/\/people.eecs.berkeley.edu\/~russell\/papers\/mi19book-hcai.pdf\">https:\/\/people.eecs.berkeley.edu\/~russell\/papers\/mi19book-hcai.pdf<\/a><\/p>\n<p>SANTOS, L.H.L. Sobre a integridade \u00e9tica da pesquisa. <em>Cienc. Cult.<\/em> [online]. 2017, vol.69, no.3, pp. 4\u20135, ISSN: 2317-6660 [viewed 7 May 2025]. <a href=\"http:\/\/doi.org\/10.21800\/2317-66602017000300002\">http:\/\/doi.org\/10.21800\/2317-66602017000300002<\/a>. Available from: <a href=\"http:\/\/cienciaecultura.bvs.br\/scielo.php?script=sci_arttext&amp;pid=S0009-67252017000300002\">http:\/\/cienciaecultura.bvs.br\/scielo.php?script=sci_arttext&amp;pid=S0009-67252017000300002<\/a><\/p>\n<p>SLEEK, S. On the Right Side of Being Wrong: the Emerging Culture of Research Transparency [online]. Association for Psychological Science. 2021 [viewed 7 May 2025]. Available from: <a href=\"https:\/\/www.psychologicalscience.org\/observer\/right-side-of-wrong\">https:\/\/www.psychologicalscience.org\/observer\/right-side-of-wrong<\/a><\/p>\n<p>Tools Such as ChatGPT Threaten Transparent Science; Here are Our Ground Rules for Their Use. <em>Nature<\/em> [online]. 2023, vol. 614 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1038\/d41586-023-00191-1\">https:\/\/doi.org\/10.1038\/d41586-023-00191-1<\/a>. Available from: <a href=\"https:\/\/www.nature.com\/articles\/d41586-023-00191-1\">https:\/\/www.nature.com\/articles\/d41586-023-00191-1<\/a><\/p>\n<p>VASCONCELOS, S. and MARU\u0160I\u0106, A. Gen AI and Research Integrity: Where to now?: The Integration of Generative AI in the Research Process Challenges Well-Established Definitions of Research Integrity. <em>EMBO Reports<\/em> [online]. 2025, vol. 26, pp. 1923\u20131928 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1038\/s44319-025-00424-6\">https:\/\/doi.org\/10.1038\/s44319-025-00424-6<\/a>. 
Available from: <a href=\"https:\/\/www.embopress.org\/doi\/full\/10.1038\/s44319-025-00424-6\">https:\/\/www.embopress.org\/doi\/full\/10.1038\/s44319-025-00424-6<\/a><\/p>\n<p>WOODGETT, J. We Must Be Open About Our Mistakes. <em>Nature<\/em> [online]. 2012, vol. 489, p. 7 [viewed 7 May 2025]. <a href=\"https:\/\/doi.org\/10.1038\/489007a\">https:\/\/doi.org\/10.1038\/489007a<\/a>. Available from: <a href=\"https:\/\/www.nature.com\/articles\/489007a\">https:\/\/www.nature.com\/articles\/489007a<\/a><\/p>\n<h3>External links<\/h3>\n<p><a href=\"https:\/\/openai.com\/index\/chatgpt\/\">ChatGPT<\/a><\/p>\n<p><a href=\"https:\/\/copilot.microsoft.com\/\">Copilot<\/a><\/p>\n<p><a href=\"https:\/\/chat.deepseek.com\/sign_in\">DeepSeek-R1<\/a><\/p>\n<p><a href=\"https:\/\/gemini.google.com\/\">Gemini<\/a><\/p>\n<p><a href=\"https:\/\/github.com\/features\/copilot\">GitHub Copilot<\/a><\/p>\n<p><a href=\"https:\/\/www.llama.com\/\">Llama 3<\/a><\/p>\n<p><a href=\"https:\/\/pi.ai\/talk\">Pi<\/a><\/p>\n<p>&nbsp;<\/p>\n<h3>About Sonia Vasconcelos<\/h3>\n<p>Sonia Vasconcelos is an Associate Professor in the Science Education Program (<em>Educa\u00e7\u00e3o, Gest\u00e3o e Difus\u00e3o em Bioci\u00eancias<\/em>) at the Institute of Medical Biochemistry Leopoldo de Meis (IBqM), Federal University of Rio de Janeiro (UFRJ). She leads the Laboratory for Research Ethics, Science Communication, and Society (LECCS) at IBqM and chairs UFRJ\u2019s Advisory Council for Research Ethics (CTEP). Prof. Vasconcelos serves as an academic editor for\u00a0<em>PLOS ONE<\/em>\u00a0and is a member of the Editorial Board of\u00a0<em>Research Integrity and Peer Review<\/em>. 
Her research and publications focus on science communication, the ethics and regulation of scientific research, research integrity, and science policy.<\/p>\n<p>&nbsp;<\/p>\n<h3>About Ana Maru\u0161i\u0107<\/h3>\n<p>Ana Maru\u0161i\u0107 is Professor of Anatomy, Chair of the Department of Research in Biomedicine and Health and Head of the Centre for Evidence-based Medicine at the University of Split School of Medicine, Split, Croatia. She is the Co-Editor in Chief of the ST-OPEN journal and editor emerita of the Journal of Global Health. Prof. Maru\u0161i\u0107 is on the Advisory Board of the EQUATOR Network and also serves on the Council of the Committee on Publication Ethics (COPE). She has authored more than 400 peer-reviewed articles and was closely involved in creating the policy of mandatory registration of clinical trials in public registries, which helped change the legal regulation of clinical trials worldwide.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Scholarly communication is in the midst of a reconfiguration that, from a conservative perspective, should be as paradigm-shifting as the creation of the first scientific journal <em>Philosophical Transactions<\/em>, in 1665. From a more disruptive perspective, this transformation will reshape the entire scientific culture, redefining the autonomy of researchers and institutions in producing and validating knowledge. Sustaining research integrity and rigor in projects and publications calls for strategies extending beyond transparency policies for researchers using Generative Artificial Intelligence. 
<span class=\"ellipsis\">&hellip;<\/span><\/p>\n","protected":false}}