Tag: Artificial Intelligence

Large Language Publishing [Originally published in the Upstream blog in January/2024]

Superimposed photograph of several books with the pages folded into an airplane shape on an infinite black background.

The New York Times ushered in the New Year with a lawsuit against OpenAI and Microsoft. OpenAI and its Microsoft patron had, according to the filing, stolen “millions of The Times’ copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides,” and more—all to train OpenAI’s LLMs. Read More →

Does Artificial Intelligence have hallucinations?

Neural net completion for "artificial intelligence", as done by DALL-E mini.

AI applications have demonstrated impressive capabilities, including the generation of very fluent and convincing responses. However, LLMs, chatbots, and the like are known to generate non-factual or nonsensical statements, commonly called “hallucinations.” Could it be that they are on drugs? Available in Spanish only. Read More →

Can AI reliably review scientific articles?

Image of two overlapping screens with words on a purple background, generated by Google DeepMind

The cost of reviewing scientific publications, in both money and time, is growing to unmanageable proportions under current methods. AI should be used as a trust system, freeing up human resources for research tasks. It would be important for SciELO to progressively incorporate AI evaluation modules into its preprint server as a further advance in the technologies it manages. Available in Spanish only. Read More →

Research and scholarly communication, AI, and the upcoming legislation

AI risk pyramid: minimal risk at the bottom, followed by limited risk, then high risk, with unacceptable risk at the top.

Can AI be used to generate terrorist “papers”, spread deadly viruses, or teach people how to make nuclear bombs at home? Is there legislation that can protect us? It looks like international regulation is on the way. Read More →

AI: How to detect chatbot texts and their plagiarism

Plagiarism diagram. The diagram consists of a drawing of three sheets of paper with text, one next to the other, followed below by a red arrow pointing down to a sheet of paper with text on which some passages are highlighted in red.

The ChatGPT-3 application is consulted on four topics under discussion concerning the production of academic texts acceptable to scientific journal editors. Each question is followed by the answer given by the OpenAI application itself, and then by our evaluation based on recent sources published on the Internet. Finally, some (human) reflections are presented which, like all things, are subject to discussion or to changes brought about by advances in technology. Read More →

ChatGPT and other AIs will transform all scientific research: initial reflections on uses and consequences – part 2

Image of an orange and pink coral-like formation, generated by Google DeepMind

In this second part of the essay, we seek to present some risks that arise particularly from the use of generative AI in the scientific field and in academic work. Although not all the problems have been fully mapped out, we have tried to offer initial reflections to support and encourage debate. Read More →

ChatGPT and other AIs will transform all scientific research: initial thoughts on uses and consequences – part 1

Image of a silhouette of a human head made up of colored threads, generated by Google DeepMind

We discuss some possible consequences, risks, and paradoxes of the use of AIs in research, such as the potential deterioration of research integrity and possible changes in the dynamics of knowledge production and in center-periphery relations in the academic environment. We conclude by calling for an in-depth dialog on regulation and on the creation of technologies adapted to our needs. Read More →

Artificial Intelligence and research communication

Watercolor of Alan Turing generated by Midjourney AI

Are chatbots really authors of scientific articles? Can they be held legally responsible or make ethical decisions? What do scientific societies, journal editors, and universities say? Can their results be included in original scientific articles? Based on the recent contributions presented here, we will be publishing posts that try to answer these questions and any new ones that arise. Read More →

GPT, machine translation, and how good they are: a comprehensive evaluation

Schematic showing the direct translation and transfer translation pyramid.

Generative artificial intelligence models have demonstrated remarkable capabilities for natural language generation, but their performance in machine translation has not been thoroughly investigated. A comprehensive evaluation of GPT models for translation is presented, comparing them with state-of-the-art commercial and research systems, including NMT, on texts in 18 languages. Read More →

It takes a body to understand the world – why ChatGPT and other language AIs don’t know what they’re saying [Originally published in The Conversation in April/2023]

Photograph of a white and silver robot holding a tablet in front of a luggage store. In the background, in the hallway, two people are walking with their backs to the camera.

Large language models can’t understand language the way humans do because they can’t perceive and make sense of the world. Read More →