Category: Analysis

Structured questionnaires can make peer review more efficient

Photo of a black and white dartboard with two darts, one yellow and one red. The red dart is at number 6 and the yellow dart is at number 9, closer to the bullseye.

To make peer review more efficient, a study proposes that reviewers answer a standard form, so that no important aspect of the manuscript’s evaluation is overlooked. Available in Portuguese only. Read More →

On preprints, journals, open access and research evaluation: the repercussions of the Gates Foundation’s decision

Photograph of the Bill and Melinda Gates Foundation Visitor Center building in Seattle, Washington, United States.

The Bill and Melinda Gates Foundation recently announced that it will no longer fund APCs for open access journals and is prioritizing the adoption of preprints. A series of recent posts discusses how the Gates Foundation’s announcement has resonated with the scientific community, prompting considerations about open access and its forms of funding, peer review and ultimately, how these changes influence the evaluation and integrity of research. Read More →

AI agents, bots and academic GPTs

Image of a work of art made up of several colored pieces in geometric formation, generated by Google DeepMind

Bots and academic GPTs are based on large language models, such as the ones behind ChatGPT, designed and sometimes trained for more specific tasks. The idea is that, by being specialized, they will deliver better results than “generic” models. This post presents some of these bots and academic GPTs. Available in Portuguese only. Read More →

Is the Bill and Melinda Gates Foundation’s new OA policy the start of a shift towards preprints? [Originally published in the LSE Impact blog in April/2024]

Following the announcement of the Bill and Melinda Gates Foundation’s new open access policy, Richard Sever assesses whether this change signals the beginning of a wider preprint-led open access transition. Read More →

Representing the Humanities collection on the SciELO platform (2022-2023)

In this post, the representatives of the Humanities and Applied Social Sciences Collection on the SciELO platform’s Advisory Committee discuss their work fronts in the 2022-2023 biennium and the challenges that remain for the coming years. Besides issues related to the Open Science Program, they discuss the threats posed to the journals’ sustainability. Read More →

Paper mills

Photo showing several pieces of shredded colored paper.

Paper mills have begun to produce and sell large numbers of low-quality articles with fabricated or plagiarized data. More recently, they have been trying to entice journal editors, offering generous sums in exchange for the rapid acceptance of articles and supplying questionable guest editors and reviewers for special issues. Read More →

The influence of implicit biases on the adoption of DEIA principles

Collage made up of overlapping silhouettes of busts on colorful paper

Adherence to the principles of diversity, equity, inclusion and accessibility (DEIA) has been hampered by implicit biases, rooted in implicit memory, which unconsciously influence actions and decisions. Progress requires institutional commitment, cultural change, goal setting, and the development of operational strategies. Read More →

Large Language Publishing [Originally published in the Upstream blog in January/2024]

Superimposed photograph of several books with the pages folded into an airplane shape on an infinite black background.

The New York Times ushered in the New Year with a lawsuit against OpenAI and Microsoft. OpenAI and its Microsoft patron had, according to the filing, stolen “millions of The Times’ copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides,” and more—all to train OpenAI’s LLMs. Read More →

How to reformulate scholarly publishing to face the peer review crisis

Scanned image of a group of purple bacteria on a black background.

The time between submission and publication of articles in the field of microbiology has been increasing in recent years. In addition, editors are having to invite more and more reviewers to identify those willing to evaluate manuscripts. What are the implications of this for peer review? Available in Portuguese only. Read More →

Can AI reliably review scientific articles?

Image of two overlapping screens with words on a purple background generated by Google DeepMind

The cost of reviewing scientific publications, in both money and time, is growing to unmanageable proportions under current methods. AI should be used as a trust system, thereby freeing up human resources for research tasks. It would be important for SciELO to progressively incorporate AI evaluation modules into its preprint server as a new advance in the development of the technologies it manages. Available in Spanish only. Read More →

The scientific community is publishing (much) more and that’s a problem

Photograph showing four stacks of documents filling almost the entire space of the image. You can see the ceiling in the background.

A study posted on arXiv reports an exponential increase in the number of refereed scientific articles published in recent years, far outstripping the growth in the number of researchers. Scientific output per researcher as author, reviewer, and editor has increased dramatically, a phenomenon referred to as “pressure on scientific publication” and classified as a problem to be identified and resolved. Available in Portuguese only. Read More →

Research and scholarly communication, AI, and the upcoming legislation

AI risk pyramid: at the bottom of the pyramid is minimal risk, above is limited risk, followed by high risk and at the top of the pyramid is unacceptable risk.

Can AI be used to generate terrorist “papers”, spread deadly viruses, or learn how to make nuclear bombs at home? Is there legislation that can protect us? It looks like international regulation is on the way. Read More →

ChatGPT and other AIs will transform all scientific research: initial reflections on uses and consequences – part 2

Image of an orange and pink coral-like formation, generated by Google DeepMind

In this second part of the essay, we present some risks that arise particularly from the use of generative AI in science and in academic work. Although not all the problems have been fully mapped out, we offer initial reflections to support and encourage debate. Read More →

ChatGPT and other AIs will transform all scientific research: initial thoughts on uses and consequences – part 1

Image of a silhouette of a human head made up of colored threads generated by Google DeepMind

We discuss some possible consequences, risks, and paradoxes of the use of AI in research, such as the potential deterioration of research integrity and possible changes in the dynamics of knowledge production and in center-periphery relations in academia. We conclude by calling for an in-depth dialogue on regulation and on the creation of technologies adapted to our needs. Read More →

Artificial Intelligence and research communication

Watercolor of Alan Turing generated by Midjourney AI

Are chatbots really authors of scientific articles? Can they be held legally responsible or make ethical decisions? What do scientific societies, journal editors, and universities say? Can their output be included in original scientific articles? Based on the recent contributions presented here, we will be publishing posts that attempt to answer these questions and any new ones that arise. Read More →