The Ring of Gyges and AI in science: When invisibility challenges integrity

By Ricardo Limongi França Coelho

In Book II of The Republic, Plato presents the allegory of the Ring of Gyges: a shepherd who, upon finding a magic ring capable of making him invisible, uses this power to seduce the queen, murder the king, and take the throne. The story is not just a tale of moral corruption; it is a profound philosophical provocation. If anyone could act with the absolute guarantee that their actions would never be discovered, would they continue to be fair? Or does human morality depend fundamentally on fear of consequences and the gaze of others?
This question, asked more than two millennia ago, resurfaces in the context of contemporary scientific output. Generative artificial intelligence (GenAI) offers researchers a sort of invisibility cloak: tools capable of producing texts, analyses, and even scientific images in a virtually undetectable manner. Faced with this reality, the scientific community confronts the same question that Glaucon posed to Socrates: if detection is flawed and the consequences uncertain, what sustains scientific integrity?

The illusion of detection

The first intuitive response to the issue would be to develop increasingly sophisticated surveillance mechanisms: detectors capable of identifying AI-generated texts, so that offenders can be punished. This approach, however, faces severe empirical limitations.
The study by Liang et al. (2023), GPT detectors are biased against non-native English writers1, published in the journal Patterns, demonstrated that AI detectors exhibit systematic bias against non-native English writers: 97.8% of TOEFL essays were incorrectly flagged as AI-generated, with an average false positive rate of 61.3%. The systematic review by Weber-Wulff et al. (2023), Testing of detection tools for AI-generated text2, evaluated 12 detection tools and concluded that the available options “are neither accurate nor reliable”, with half of AI-generated texts managing to evade detection after manual editing. OpenAI itself discontinued its classifier in July 2023 due to low accuracy, and Vanderbilt University disabled Turnitin’s AI detection, citing concerns about false positives.
These data reveal a basic asymmetry: while the ability to generate text by AI advances exponentially, the ability to detect it remains structurally limited. Gyges’ ring, in practice, works.

The Brazilian scenario: an institutional vacuum

If detection tools are insufficient, institutions would be left to establish clear guidelines to advise researchers on the acceptable use of AI. Internationally, there is convergence. In 2024, COPE (Committee on Publication Ethics) established that AI tools “cannot meet the requirements of authorship, as they cannot take responsibility for the work submitted”3. Nature has prohibited LLMs as authors since January 20234 and requires documentation in the methods section. The journal Science classifies violations as “scientific misconduct, no different from altered images or plagiarism”.
In Brazil, however, a worrying gap remains. To date, CAPES and CNPq have not published formal policies on the use of AI in scientific output. In April 2025, CAPES released a document entitled “Artificial intelligence in research and funding”, but explicitly classified it as a “Text for Discussion”, not as a binding institutional policy.
SciELO stands out as a virtuous exception in the national context, having published in September 2023 the “Guide to the Use of Artificial Intelligence Tools and Resources in Research Communication on the SciELO Network”5, which establishes clear principles: mandatory declaration of use, content verification for plagiarism, prohibition of AI as an author, and prohibition of the use of AI in peer review.
To partially fill this void, Brazilian researchers launched in 2024 the “Diretrizes para o uso ético e responsável da Inteligência Artificial Generativa: um guia prático para pesquisadores”6, published by Editora Intercom. The document, authored by Sampaio (UFPR), Sabbatini (UFPE), and Limongi (UFG), criticizes the dependence on international company policies and proposes the centrality of human agency in research processes.

The paradox of transparency

Even where guidelines do exist, their effectiveness faces obstacles. The so-called “transparency paradox”7, documented by Sampaio (2025) on the SciELO in Perspective blog, reveals a tension: although declaring the use of AI is ethically correct, doing so can diminish the trust placed in the researcher. Studies indicate that only 5.7% of authors voluntarily disclose the use of AI, a rate significantly lower than that observed in anonymous surveys on actual practices. Paradoxically, the requirement for transparency creates disincentives to transparency itself.
This paradox highlights that the problem cannot be solved through external regulations alone. When detection is flawed, guidelines are inconsistent, and transparency is discouraged, what remains?

From surveillance to training: a Socratic response

Socrates’ response to Glaucon’s provocation was to argue that justice has intrinsic value, that being just is worthwhile in itself, regardless of external rewards or punishments. A truly just person would continue to act correctly even with Gyges’ ring, because integrity is constitutive of a well-lived life.
Transposed to the scientific context, this perspective suggests that research integrity cannot depend primarily on mechanisms of surveillance and punishment. It needs to be cultivated as a value in itself, a constitutive part of what it means to be a researcher.
Recent literature points in this direction. Das Deep et al. (2025), in a qualitative synthesis published in the journal Information, Evaluating the Effectiveness and Ethical Implications of AI Detection Tools in Higher Education8, document that overreliance on detection creates “a culture of surveillance that prioritizes policing over teaching”. The EDUCAUSE AI Literacy framework (2024) proposes competencies organized into four areas: Technical Understanding, Evaluative Skills, Practical Application, and Ethical Considerations. The central argument is that a sustainable response involves training and the redesign of evaluative practices, rather than the intensification of technological surveillance.
The independent Brazilian guidelines by Sampaio, Sabbatini, and Limongi (2024) echo this perspective by proposing essential competencies for researchers: understanding AI tools and their limitations, maintaining human authorship as a central element, transparency in statements of use, critical evaluation of outputs, recognition of biases and errors, and preservation of human agency in analysis.

Concluding remarks

The allegory of the Ring of Gyges reminds us that the basic issue is not technological, but ethical. Detection tools will continue to be developed and circumvented. Guidelines will continue to be published and proved inconsistent across institutions. What remains is the question: what kind of researcher do we wish to train?
The answer lies in three complementary paths. First, transparency as a value, not as a bureaucratic obligation, which requires addressing the transparency paradox with policies that do not penalize honesty. Second, robust ethical training that cultivates integrity as a constitutive part of the researcher’s identity, not as a response to the fear of being exposed. Third, a review of the evaluation metrics that reward quantity over quality and impact, creating perverse incentives for the irresponsible use of AI.
Plato argued that Gyges, by using the ring for corrupt purposes, did not become freer; he became a slave to his appetites, alienated from his own humanity. Similarly, researchers who use AI to simulate scientific output are not optimizing their careers; they are emptying their activity of any meaning.
The invisibility conferred by generative AI is, in the end, an opportunity for questioning: do we do science because we fear being discovered, or because the pursuit of knowledge is constitutive of who we are? The answer to this question will determine not only institutional policies, but also the future of scholarly communication itself.


Notes

1. LIANG, W., et al. GPT detectors are biased against non-native English writers. Patterns [online]. 2023, vol. 4, no. 7, art. 100779, ISSN: 2666-3899 [viewed 04 February 2026]. https://doi.org/10.1016/j.patter.2023.100779. Available from: https://www.cell.com/patterns/fulltext/S2666-3899(23)00130-7?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS2666389923001307%3Fshowall%3Dtrue

2. WEBER-WULFF, D., et al. Testing of detection tools for AI-generated text. International Journal for Educational Integrity [online]. 2023, vol. 19, art. 26, ISSN: 1833-2595. https://doi.org/10.1007/s40979-023-00146-z. Available from: https://link.springer.com/article/10.1007/s40979-023-00146-z

3. Authorship and AI tools [online]. Committee on Publication Ethics (COPE). 2024 [viewed 04 February 2026]. Available from: https://doi.org/10.24318/cCVRZBms

4. Artificial Intelligence (AI) – Editorial Policies [online]. Nature Portfolio. 2023 [viewed 04 February 2026]. Available from: https://www.nature.com/nature-portfolio/editorial-policies/ai

5. Guide to the Use of Artificial Intelligence Tools and Resources in Research Communication on SciELO [online]. SciELO – Scientific Electronic Library Online, 2023 [viewed 04 February 2026]. Available from: https://www.scielo.org/en/about-scielo/methodologies-and-technologies/guide-to-the-use-of-artificial-intelligence-tools-and-resources-in-research-communication-on-the-scielo-network/

6. SAMPAIO, R.C.; SABBATINI, M. and LIMONGI, R. Diretrizes para o uso ético e responsável da Inteligência Artificial Generativa: um guia prático para pesquisadores. São Paulo: Editora Intercom, 2024. Available from: https://www.portcom.intercom.org.br/ebooks/detalheEbook.php?id=57203

7. SAMPAIO, R. The transparency paradox when using generative AI in academic research [online]. SciELO em Perspectiva, 2025 [viewed 04 February 2026]. Available from: https://blog.scielo.org/en/2025/10/10/the-transparency-paradox-when-using-generative-ai-in-academic-research/

8. DAS DEEP, S., et al. Evaluating the Effectiveness and Ethical Implications of AI Detection Tools in Higher Education. Information [online]. 2025, vol. 16, no. 10, art. 905, ISSN: 2078-2489 [viewed 04 February 2026]. https://doi.org/10.3390/info16100905. Available from: https://www.mdpi.com/2078-2489/16/10/905

References

Artificial Intelligence (AI) – Editorial Policies [online]. Nature Portfolio. 2023 [viewed 05 February 2026]. Available from: https://www.nature.com/nature-portfolio/editorial-policies/ai

Authorship and AI tools [online]. Committee on Publication Ethics (COPE). 2024 [viewed 05 February 2026]. https://doi.org/10.24318/cCVRZBms. Available from: https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools

DAS DEEP, S., et al. Evaluating the Effectiveness and Ethical Implications of AI Detection Tools in Higher Education. Information [online]. 2025, vol. 16, no. 10, art. 905, ISSN: 2078-2489 [viewed 05 February 2026]. https://doi.org/10.3390/info16100905. Available from: https://www.mdpi.com/2078-2489/16/10/905

Guide to the Use of Artificial Intelligence Tools and Resources in Research Communication on SciELO [online]. SciELO – Scientific Electronic Library Online, 2023 [viewed 05 February 2026]. Available from: https://www.scielo.org/en/about-scielo/methodologies-and-technologies/guide-to-the-use-of-artificial-intelligence-tools-and-resources-in-research-communication-on-the-scielo-network/

LIANG, W., et al. GPT detectors are biased against non-native English writers. Patterns [online]. 2023, vol. 4, no. 7, art. 100779, ISSN: 2666-3899 [viewed 05 February 2026]. https://doi.org/10.1016/j.patter.2023.100779. Available from: https://www.cell.com/patterns/fulltext/S2666-3899(23)00130-7?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS2666389923001307%3Fshowall%3Dtrue

SAMPAIO, R.C.; SABBATINI, M. and LIMONGI, R. Diretrizes para o uso ético e responsável da Inteligência Artificial Generativa: um guia prático para pesquisadores. São Paulo: Editora Intercom, 2024. Available from: https://www.portcom.intercom.org.br/ebooks/detalheEbook.php?id=57203

SAMPAIO, R.C.; SABBATINI, M. and LIMONGI, R. Pesquisadores lançam diretrizes IAG [online]. SciELO em Perspectiva, 2025 [viewed 05 February 2026]. Available from: https://blog.scielo.org/blog/2025/02/05/pesquisadores-lancam-diretrizes-iag/

SAMPAIO, R. The transparency paradox when using generative AI in academic research [online]. SciELO em Perspectiva, 2025 [viewed 05 February 2026]. Available from: https://blog.scielo.org/en/2025/10/10/the-transparency-paradox-when-using-generative-ai-in-academic-research/

THORP, H. H. ChatGPT is fun, but not an author. Science [online]. 2023, vol. 379, no. 6630, pp. 313-313, ISSN: 0036-8075 [viewed 05 February 2026]. https://doi.org/10.1126/science.adg7879. Available from: https://www.science.org/doi/10.1126/science.adg7879

WEBER-WULFF, D.; ANOHINA-NAUMECA, A.; BJELOBABA, S.; FOLTÝNEK, T.; GUERRERO-DIB, J.; POPOOLA, O.; ŠIGUT, P.; WADDINGTON, L. Testing of detection tools for AI-generated text. International Journal for Educational Integrity [online]. 2023, vol. 19, no. 26, pp. 1-39, ISSN: 1833-2595 [viewed 05 February 2026]. https://doi.org/10.1007/s40979-023-00146-z. Available from: https://link.springer.com/article/10.1007/s40979-023-00146-z

 

About Ricardo Limongi França Coelho


Professor of Marketing and Artificial Intelligence, Universidade Federal de Goiás (UFG), Goiânia, GO, and Editor-in-Chief of the Brazilian Administration Review (BAR), an ANPAD journal; DT-CNPq Scholarship Recipient.

 

 

Translated from the original in Portuguese by Lilian Nassi-Calò.

 

How to cite this post [ISO 690/2010]:

COELHO, R.L.F. The Ring of Gyges and AI in science: When invisibility challenges integrity [online]. SciELO in Perspective, 2026 [viewed ]. Available from: https://blog.scielo.org/en/2026/02/06/the-ring-of-gyges-and-ai-in-science-when-invisibility-challenges-integrity/
