OpenAlex

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Ensuring peer review integrity in the era of large language models: A critical stocktaking of challenges, red flags, and recommendations

2025 · 8 citations · European Journal of Radiology Artificial Intelligence · Open Access

8 citations · 5 authors · 2025

Abstract

The rise of large language models (LLMs) such as ChatGPT offers promising academic opportunities but also raises concerns for peer review. Reviewers increasingly use LLMs to refine language or draft reviews, blurring the line between using artificial intelligence (AI) as a supportive tool and letting it take a leading role in the peer review of manuscripts. Given the impracticality of enforcing a ban on reviewers' use of LLMs, this obscurity poses challenges for editorial teams in maintaining peer review integrity. Based on the literature and the authors' editorial experience, this brief paper examines the challenges of detecting LLM-shaped reviews, highlights key indicators, and offers recommendations to uphold the rigor of peer review: an imperfect system, yet still the best we have in scholarly publishing.

- LLMs may provide significant benefits but can cause major challenges for peer review.
- Detecting LLM-shaped reviews remains complex.
- Several findings may signal AI involvement.
- Clear policies on AI in peer review are essential.
- LLMs should assist, not replace, human reviewers.
