AI chatbots and the simulation of dialog: what does Bakhtinian theory have to say?

By Ives da Silva Duque-Pereira and Sérgio Arruda de Moura

My “friend”, the Robot

We live in times when talking to a machine has become routine in many people's lives, with a series of implications aggravated by the trend towards anthropomorphizing tools. Whether it is asking for help to solve a problem, seeking a friend's advice, writing a text or clearing up academic doubts, Generative Artificial Intelligence (GenAI) chatbots are already among us — often answering us with impressive fluency. But is talking to a chatbot the same as dialog? As an elementary school teacher and researcher, I wonder what is actually happening in the interactions between students and GenAI chatbots.

This question led us to look at these artificial intelligence (AI) systems through the lens of language. More precisely, the philosophy of language of Mikhail Bakhtin, a Russian thinker who dedicated much of his work to understanding how meaning is constructed in the relationship between the voices of a discourse.

Many voices and no agency?

For Bakhtin, all discourse is, by nature, a dialog. Even when we write a text on our own, we are responding to something — to other ideas, to other authors, to a situation. It is as if each speech is part of a great river of utterances, flowing and intertwining with what has already been said and what is yet to come.

But what about when that speech comes from a chatbot? Apparently, there is an answer there, an interaction. Language flows, styles vary, vocabulary adapts. But on closer inspection, something breaks down. There is no consciousness, no apparent human intentionality or life story underpinning that response. What we see is a performance — a dialog simulation produced by an algorithm.

And it is precisely this that made us think about what we call algorithmic monologism: a term that helps us describe what happens when an AI simulates many voices, yet they all pass through the same algorithmic filter, trained to avoid conflict, balance the sides and deliver a "safe" speech that responds to what is expected. Although they appear to engage in dialog, these AI systems operate under a centralizing logic, which simulates a multiplicity of voices but unifies them into standardized, conciliatory responses devoid of real conflict.

While Bakhtin saw dialog as an encounter between concrete social consciousnesses, each with their own place in the world, chatbots merely stage this otherness. In the end, we have a conversation in appearance, but in essence it is organized by a unifying center that silences actual differences.

Chatbots such as ChatGPT are able to considerably vary their styles — they use slang, technical terms, emojis, citations — and this may even recall the Bakhtinian concept of heteroglossia, that mixture of social languages that make up discourse. However, we need to go beyond the surface and ask ourselves whether these voices are there by conscious choice or are just reproductions pasted together, without agency, conflict, or tension.

In practice, what we often have is a kind of controlled polyphony. The chatbot presents different points of view, but all reconciled within the same response. No voice interrupts, provokes or destabilizes. Everything ends in harmony. But true dialog, as Bakhtin reminded us, is born precisely from the clash, the encounter with the other in their difference.

What does this have to do with science and education?

Everything. If language is the soil of knowledge construction, we need to ask ourselves what kind of discourse we are feeding when we use these tools in education, research and knowledge production. A chatbot that delivers “neutral” answers may seem efficient, but it can also accustom us to a logic of single answers, of flat truths, of consensus that has never been debated.

This is where critical literacy in AI becomes urgent. It is not enough to know how to use AI to "save time", "increase productivity", "assist" or "automate" tasks. We need to be able to critically read the answers it gives us and ask ourselves: where do these ideas come from? Who is being cited? What is being left out? What values are embedded in this way of responding?

The model we propose qualitatively evaluates answers given by a chatbot (GPT-4.5) using Bakhtin's concepts. As an example, we analyzed answers to questions on school topics — from "how to deal with indiscipline" to "what were the industrial revolutions". We analyzed the structure, the language styles, the implicit values, and the type of interaction promoted.

We noticed that even when the chatbot simulates different points of view, its responses tend to be conciliatory, consensual and conflict-free. This generates what we call a simulated dialog, because there is no real confrontation of ideas, just an apparently dialogical surface, with polyphonic simulation.

One of the most worrying effects is the homogenization of discourse. When a chatbot responds by always trying to please, avoiding polemics and presenting all sides "equally", it prevents us from seeing the conflict, the inequality, the dispute of narratives that make up the real world. Instead of provoking thought, it wraps it up in a polite response. And this can be dangerous, especially at school and university, spaces that are supposed to foster critical thinking.

Chatbots tend to reinforce dominant voices and exclude peripheral knowledge, radical criticism and alternative views. In doing so, they create the impression that there is only one way of thinking, a “natural” consensus — when, in fact, science and education advance precisely through the conflict of ideas, the diversity of perspectives.

AI, discourse and science: who speaks behind the machine?

More than mastering tools, we need to recognize the limits of what they can (or cannot) tell us. We need methodologies that analyze these discourses in depth, that understand that behind every apparently neutral response, there are decisions, ideologies, patterns and omissions.

We believe that we increasingly need critical analytical methodologies to investigate the impacts of AI on language, education and science. The model of analysis we propose is far from definitive, but we hope it offers a methodological path that neither romanticizes technology nor demonizes it. It seeks to understand AI discourse as a social, ideological and situated phenomenon — and therefore one that can be analyzed with the tools of the human sciences.

AI can be an ally — as long as we treat it with the same seriousness and criticality as any other source of discourse. After all, it is not because an answer comes from the machine that it is outside politics, ideology or social context.

By linking Bakhtin's theory to the analysis of systems like ChatGPT, we argue that there is no discourse without ideology. Even if a chatbot seems neutral, its responses are shaped by previous human selections — of data, parameters, filters. Thus the need arises to ask: who is speaking through AI? Which voices does it amplify? Which ones does it silence?

For the field of scholarly communication, this analysis opens up ways to think critically about the use of algorithmic tools in editorial processes, writing practices, manuscript evaluation and knowledge production. AI can be an ally, but only if it is understood within the ideological, discursive, and political fabric that constitutes it.

To read the preprint, access

DUQUE-PEREIRA, I.S. and MOURA, S.A. Monologismo Algorítmico e Dialogismo Simulado: uma análise bakhtiniana do discurso mediado por chatbots de IA. SciELO Preprints [online]. 2025 [viewed 30 April 2025]. https://doi.org/10.1590/SciELOPreprints.11590. Available from: https://preprints.scielo.org/index.php/scielo/preprint/view/11590

 

About Ives da Silva Duque Pereira

Ives da Silva Duque Pereira is a doctoral student in the Graduate Program in Cognition and Language at the Universidade Estadual do Norte Fluminense Darcy Ribeiro (UENF). He is a professor with the Rio de Janeiro State Department of Education (SEEDUC-RJ) and the Technical School Support Foundation (Fundação de Apoio à Escola Técnica – FAETEC).

 

About Sérgio Arruda de Moura

Sérgio Arruda de Moura is a professor in the Graduate Program in Cognition and Language at the Universidade Estadual do Norte Fluminense Darcy Ribeiro (UENF).

 

Translated from the original in Portuguese by Lilian Nassi-Calò.

 

How to cite this post [ISO 690/2010]:

DUQUE-PEREIRA, I.S. and MOURA, S.A. AI chatbots and the simulation of dialog: what does Bakhtinian theory have to say? [online]. SciELO in Perspective, 2025 [viewed ]. Available from: https://blog.scielo.org/en/2025/04/30/ai-chatbots-and-the-simulation-of-dialog-what-does-bakhtinian-theory-have-to-say/

 
