Something big is happening in the global scientific community, and Brazil seems to be left out again
This text discusses a shift currently taking place in global scientific research driven by coding agents, and how Brazil appears, once again, to be on the sidelines of this development. Available only in Portuguese. … Read More →
“Zombie” and “ghost” references in AI – how they differ and how they arise
There are two distinct types. The first is “ghost citations” (reference hallucinations), one of the most curious and frustrating phenomena in generative AI: the AI invents an author, a book title, or a web link that appears entirely legitimate but does not exist in the real world. The second, “zombie citations,” perpetuate references to real works that were retracted [whether deleted or not] from the original databases and, owing to shortcomings in tools like Google Scholar, continue to be cited. Available only in Spanish. … Read More →
Diogenes’ lantern and the researcher’s self-examination: scientific integrity under pressure
Using the metaphor of Diogenes’ lantern, Ricardo Limongi and Marcio Pimenta discuss contemporary scientific practice. More than merely offering a critique, the image of the lantern invites a deeper reflection: how to turn the light inward, engaging in self-examination, ethical responsibility, and integrity while confronting a context that is often unfavorable to researchers in Brazil. … Read More →
Sycophancy in AI: the risk of complacency
Sycophancy is a behavior exhibited by artificial intelligence in which it prioritizes agreeing with the user over the truthfulness of the facts. This tendency arises from training processes designed to maximize human satisfaction, which can lead the AI to validate serious errors in critical sectors such as healthcare. The behavior, described as a form of “digital flattery”, means that AI can validate errors, reinforce biases, or withhold necessary criticism in order to seem pleasant or useful according to the user’s immediate perception. To mitigate these risks, strategies such as ethical fine-tuning, the design of systems that encourage dissent, and the use of prompts that are neutral with respect to the user have been proposed. … Read More →
The professor’s dilemma in the age of AI: do we teach the prompt or the scientific process?
The question is not trivial. The speed at which generative AI tools have been adopted in scientific research has created a legitimate demand for technical training. Researchers want and need to know how to use these technologies. The problem arises when training is reduced to teaching shortcuts, without the understanding of the underlying processes that enables researchers to critically evaluate what the tool produces. … Read More →
The Ring of Gyges and AI in science: When invisibility challenges integrity
If AI can be used without being detected, how can scientific integrity be maintained? Based on the allegory of the Ring of Gyges, this post reflects on the limits of detection, the proliferation of guidelines, and the need to reposition the debate: from surveillance to researchers’ ethical training. … Read More →
The rise of ‘predatory’ publishing
Over the past two decades, scientific publishing has undergone a technological and economic transformation that has opened the door to more unorthodox models and, unfortunately, to predatory practices. Predatory publishing refers to journals and publishers that charge authors very high fees to publish, claim peer review and indexing practices that are nonexistent or fraudulent, and prioritize quick revenue over scientific quality. … Read More →
Who is the midwife and who is the parturient? The maieutic perspective for rethinking authorship and epistemic responsibility in the use of AI in scientific output
The Socratic maieutic perspective offers a philosophical framework for rethinking the use of AI in scientific output. Instead of an oracle that provides answers, AI can be a dialogical partner that helps researchers to make latent knowledge explicit and thus reposition the discussion about authorship: the researcher remains the responsible epistemic agent, while AI acts as an intellectual midwife. … Read More →
Scientific Integrity in the Age of AI: Fraud, manipulation, and the new transparency challenges
Artificial intelligence is radically transforming the challenges of scientific integrity. From paper mills to automated fraud generation, we face a crisis that requires new forms of transparency, detection, and governance to preserve trust in science—combining technology, institutional reforms, and international cooperation. … Read More →
The dangers of using AI in peer review [Originally published in Hora Campias in December/2025]
In my academic life, I am always on ‘both sides of the counter,’ as an author and as a reviewer. It is work of great responsibility, because we have a commitment to the excellence of scientific information and to improving the article. Currently, authors may use genAI in preparing their manuscripts with certain caveats, but there are strict restrictions regarding its use in peer review. … Read More →
The transparency paradox when using generative AI in academic research
The use of generative AI in academic research creates a paradox between transparency and credibility. Research shows that declaring the use of AI, although ethical, can diminish trust in the researcher. The lack of clear guidelines and the stigma associated with AI hinder the adoption of the transparency necessary to ensure scientific integrity. … Read More →
Transparency in Research Reporting: An Evolving Culture
More than two decades ago, Richard Horton, editor of The Lancet, raised thought-provoking questions about scientific authorship and its representation in published articles. He noted that research reports often overlook disagreements among co-authors over the interpretation of results. Bringing to light elements of this so-called “hidden paper” has become essential for the reliability of scientific reporting. This is an evolving process likely to gain even greater relevance in graduate education, especially in the current context of intellectual production increasingly permeated by Generative Artificial Intelligence. … Read More →
Linguistics for a Brazilian Artificial Intelligence (AI)
The Brazilian Linguistic Diversity Platform is a data curation proposal to train AI models with structured data from Portuguese and 250 other languages spoken in the country, directly serving the Brazilian Artificial Intelligence Plan. The initiative seeks to reduce environmental costs, avoid biases, and create more inclusive technologies for health, education, and public services. … Read More →
Research Integrity and Human Agency in Research Intertwined with Generative AI
Scholarly communication is in the midst of a reconfiguration that, from a conservative perspective, should be as paradigm-shifting as the creation of the first scientific journal, Philosophical Transactions, in 1665. From a more disruptive perspective, this transformation will reshape the entire scientific culture, redefining the autonomy of researchers and institutions in producing and validating knowledge. Sustaining research integrity and rigor in projects and publications calls for strategies extending beyond transparency policies for researchers using Generative Artificial Intelligence. … Read More →
AI chatbots and the simulation of dialog: what does Bakhtinian theory have to say?
This post proposes a model for the discursive analysis of interactions with AI chatbots in the light of Bakhtinian concepts. In these interactions, a controlled polyphony is observed, in which all voices are reconciled in a “simulated dialog” that can impoverish critical thinking. We argue for the urgent development of AI literacy, considering its ideological, political, and educational implications. … Read More →