Tag: Artificial Intelligence

Where to start with AI in research management [Originally published in the LSE Impact blog in December/2024]

Image generated by Google DeepMind. The image has a purple background with the text “How do large language models work?” and a brief description below.

Generative AI is having a transformative effect on academic work, but it is also reshaping the professional services and research management sectors that support it. Here Anna Aston discusses where AI can be useful for research management and the tools research managers can use in different areas of their work. Read More →

An Analysis of the Epistemological Foundations of Machine Learning

Photograph of a robotic hand and a human hand reaching towards each other against a plain background, nearly touching fingertips.

Outlined here is a critical review of the logical-epistemic foundations of machine learning, focusing on the limits of AI systems’ autonomy in generating knowledge. It contrasts the possibility of autonomous knowledge generation with the theoretical constraints posed by Chaitin’s incompleteness theorem, which suggests that AI cannot surpass human cognitive capacity. Read More →

How to translate academic writing into podcasts using generative AI [Originally published in the LSE Impact blog in June/2024]

Image of a work of art made up of several lilac letters in a cloud-like formation, generated by Google DeepMind.

One of the benefits of generative AI is its ability to transform one medium into another: text into speech, imagery, or video. In this post Andy Tattersall explores one aspect of this ability by transforming his archive of written blogposts into a podcast, Talking Threads, and discusses why and how this could benefit research communication. Read More →

Web platform can revolutionize the essay correction process

Promotional image of CRIA (artificial intelligence essay grader), showing the tool's logo, details about its features, and social media contacts, all on a purple background.

In search of an alternative to the laborious process of correcting essays, more specifically identifying deviations from the assigned theme, researchers have developed a text feedback platform that simulates the guidelines and grading of the National High School Examination (Exame Nacional do Ensino Médio, ENEM): the Corrector of Essays by Artificial Intelligence (Corretor de Redações por Inteligência Artificial, CRIA). The tool is already being used by students and education professionals. Read More →

AI agents, bots and academic GPTs

Image of a work of art made up of several colored pieces in geometric formation, generated by Google DeepMind.

Bots and academic GPTs are built on large language models, such as the one behind ChatGPT, and are designed and sometimes trained for more specific tasks. The idea is that, by being specialized, they will deliver better results than “generic” models. This post presents some of these bots and academic GPTs. Available in Portuguese only. Read More →

Large Language Publishing [Originally published in the Upstream blog in January/2024]

Superimposed photograph of several books with the pages folded into an airplane shape on an infinite black background.

The New York Times ushered in the New Year with a lawsuit against OpenAI and Microsoft. OpenAI and its Microsoft patron had, according to the filing, stolen “millions of The Times’ copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides,” and more—all to train OpenAI’s LLMs. Read More →

Does Artificial Intelligence have hallucinations?

Neural net completion for "artificial intelligence", as done by DALL-E mini.

AI applications have demonstrated impressive capabilities, including the generation of very fluent and convincing responses. However, LLMs, chatbots, and the like are known for generating inaccurate or nonsensical statements, commonly known as “hallucinations.” Could it be that they are on drugs? Available in Spanish only. Read More →

Can AI reliably review scientific articles?

Image of two overlapping screens with words on a purple background, generated by Google DeepMind.

The cost of reviewing scientific publications, in both money and time, is growing to unmanageable proportions under current methods. It is necessary to use AI as a trusted system and thus free up human resources for research tasks. It would be important for SciELO to progressively incorporate AI evaluation modules into its preprints server as a further advance in the technologies it manages. Available in Spanish only. Read More →

Research and scholarly communication, AI, and the upcoming legislation

AI risk pyramid: at the bottom of the pyramid is minimal risk, above is limited risk, followed by high risk and at the top of the pyramid is unacceptable risk.

Can AI be used to generate terrorist “papers”, spread deadly viruses, or learn how to make nuclear bombs at home? Is there legislation that can protect us? It looks like international regulation is on the way. Read More →

AI: How to detect chatbot texts and their plagiarism

Plagiarism diagram. The diagram consists of a drawing of three sheets of paper with text, one next to the other, followed below by a red arrow pointing down to a sheet of paper with text on which some passages are highlighted in red.

The ChatGPT-3 application is consulted on four topics under discussion regarding the production of academic texts acceptable to scientific journal editors. Each question is followed by the answer given by the OpenAI application itself, and then by our evaluation, based on recent sources published on the Internet. Finally, some (human) reflections are presented which, like all things, are subject to discussion or to changes brought about by advances in technology. Read More →

ChatGPT and other AIs will transform all scientific research: initial reflections on uses and consequences – part 2

Image of an orange and pink coral-like formation, generated by Google DeepMind.

In this second part of the essay, we seek to present some risks that arise particularly from the use of generative AI in the scientific field and in academic work. Although not all the problems have been fully mapped out, we offer initial reflections to support and encourage debate. Read More →

ChatGPT and other AIs will transform all scientific research: initial thoughts on uses and consequences – part 1

Image of a silhouette of a human head made up of colored threads, generated by Google DeepMind.

We discuss some possible consequences, risks, and paradoxes of the use of AIs in research, such as the potential deterioration of research integrity and possible changes in the dynamics of knowledge production and in center-periphery relations in the academic environment. We conclude by calling for an in-depth dialog on regulation and on the creation of technologies adapted to our needs. Read More →

Artificial Intelligence and research communication

Watercolor of Alan Turing, generated by Midjourney AI.

Are chatbots really authors of scientific articles? Can they be legally responsible or make ethical decisions? What do scientific societies, journal editors, and universities say? Can their output be included in original scientific articles? Based on the recent contributions presented here, we will be publishing posts that try to answer these questions and any new ones that arise. Read More →

GPT, machine translation, and how good they are: a comprehensive evaluation

Schematic showing the direct translation and transfer translation pyramid.

Generative artificial intelligence models have demonstrated remarkable capabilities for natural language generation, but their performance in machine translation has not been thoroughly investigated. A comprehensive evaluation of GPT models for translation is presented, comparing them with state-of-the-art commercial and research systems, including neural machine translation (NMT) systems, tested on texts in 18 languages. Read More →

It takes a body to understand the world – why ChatGPT and other language AIs don’t know what they’re saying [Originally published in The Conversation in April/2023]

Photograph of a white and silver robot holding a tablet in front of a luggage store. In the background, in the hallway, two people are walking with their backs to the camera.

Large language models can’t understand language the way humans do because they can’t perceive and make sense of the world. Read More →