What do evaluators gain if they conduct a peer review?

Authors

Juan D. Machin-Mastromatteo, Universidad Autónoma de Chihuahua

DOI:

https://doi.org/10.54167/rei.v2i1.1585

Keywords:

scientific journals, scientific articles, peer review, evaluation, compensation, scientific publishing ecosystem, publication models, ethics, integrity

Abstract

Starting from the question "What do evaluators gain if they conduct a peer review?", this article first explains the relevance of this process for scientific publishing, describing its importance and associated challenges. These establish that this editorial process is in crisis for various reasons, mainly the level of specialization it requires, the conditions a reviewer should ideally meet, and the shortage of experts available and willing to carry out this work, which is usually voluntary. Against this backdrop, the answer to the question is developed in three sections on the remuneration and compensation modalities that may exist for researchers who perform this activity. These modalities can represent attractive rewards and encourage reviewing, but they are not exempt from debates and controversies that journal editors must consider carefully: 1) moral recognition and personal enrichment; 2) remuneration in kind; and 3) the strange case of paying for peer reviews.

Author Biography

Juan D. Machin-Mastromatteo, Universidad Autónoma de Chihuahua

Professor at the Autonomous University of Chihuahua (UACH, Mexico) and member of the National System of Researchers (Level II). He holds a PhD in Information and Communication Science (Tallinn University), a Master in Digital Library Learning (Oslo University College, Tallinn University, and Parma University), and a Bachelor in Librarianship (Central University of Venezuela). He specializes in information literacy, action research, bibliometrics, open access, and digital libraries. He has more than 130 scientific publications, has facilitated more than 40 courses, and has participated in more than 120 international events as a speaker, panelist, organizer, or moderator. He is Associate Editor of Information Development (SAGE) and of the Information Studies Journal (UACH), and a member of the editorial advisory boards of The Journal of Academic Librarianship (Elsevier) and IE Revista de Investigación Educativa (REDIECH). From 2015 to 2020 he published the column Developing Latin America in Information Development. In 2019 he created the Juantífico Project, a series of videos on scientific information, research, publication, and dissemination. Since 2022 he has co-hosted the InfoTecarios podcast, and since 2023 he has published the School of Editors section in the Information Studies Journal.

Published

2024-06-30

How to Cite

Machin-Mastromatteo, J. D. (2024). What do evaluators gain if they conduct a peer review? Revista Estudios de la Información, 2(1), 136–145. https://doi.org/10.54167/rei.v2i1.1585

Issue

Vol. 2 No. 1 (2024)

Section

School of Editors
