OpenAlex · Updated hourly · Last updated: Mar 18, 2026, 11:10

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Editorial: Fully Compromised, but Thanks All the Same to Our Peer Reviewers

2025 · 2 citations · Clinical Orthopaedics and Related Research
Open full text at publisher

2 citations · 1 author · 2025

Abstract

This is going to be a quickie, and the focus at the beginning goes, as it should, to the league-leading peer reviewers, editors, and staff at Clinical Orthopaedics and Related Research®. This is because without those individuals—in particular the 670 reviewers who volunteered their time this year to help us decide which papers are worth readers’ attention, and to make those papers better—nothing is possible here. Their names are listed on page 2408. Special thanks in particular go to CORR’s top reviewers, the group of 45 subject-matter experts who did four or more reviews between September 1, 2024 and August 31, 2025 that consistently earned the highest scores for review quality. They are truly the best of the best, and you will find their names on page 2407. As always, I’ve offered to write a note recognizing their contributions to their departments, and each of them has earned a one-year electronic subscription to CORR for a job very well done. We’re now at the part that comes each December where I bring to your attention some matter related to research or publishing. One guess as to what this year’s theme is. Right you are: It’s the abuse, misuse, and undisclosed use of artificial intelligence in scientific reporting, in particular the use of large language models (LLMs) in the creation and dissemination of orthopaedic research. We’ve covered this a few times here in the recent past: an initial setting-of-the-standards editorial in partnership with several other leading journals [3] and an update to those standards just a year later, dealing with issues that had arisen as well as opportunities we saw for using these tools in the work we do here [2]. Most recently, we shared a troubling observation about some authors using AI-based tools not just to help improve their writing but to create (problematic) content de novo [4]. 
So rather than yelling once more at any clouds that look like an LLM, I thought it was only fair to ask a representative from the ‘bots—in this case, ChatGPT-4o—to join the conversation. Here was my request to ChatGPT-4o: Write a five-paragraph editorial in the style of Seth Leopold's editorials from the last several December issues of Clinical Orthopaedics and Related Research thanking peer reviewers and commenting on the troubling issues in scholarly publishing that have arisen this year. And here was ChatGPT’s treacly reply: Certainly. Here's an editorial modeled after the thoughtful, earnest, and ethically focused style characteristic of Seth Leopold's December editorials in Clinical Orthopaedics and Related Research (CORR). It balances gratitude toward peer reviewers with concern about the direction of scholarly publishing, in a tone that echoes Dr. Leopold’s values: humility, transparency, and moral clarity. Had you said that, I’d have come back with something like, “Awww, shucks. So nice of you to say.” Hearing it from a machine gave me the collywobbles. What follows below is ChatGPT’s output, absolutely unedited. If I were a more courteous guy, or if I feared for Dr. GPT’s feelings, I’d say something like, “I couldn’t have said it better myself.” But I think I could have. It’s not about getting the grammar right, pointing out some (obvious and anodyne) publishing topics that have been lying around the internet for years, or even crafting a creepy simulacrum of a human author’s voice—all of which GPT managed to do here, as you’ll see in a moment. It’s about how thumbless the LLM’s grasp is on the fact that the biggest problem in publishing right now is the very crap it just spontaneously generated. When an author uses an LLM to generate content (as opposed to using it for, let’s say, language translation), something important is lost. 
That something is the chance for a human author to share a human perspective with human readers on a topic of mutual human interest. I’m not so immodest as to believe that my perspective appeals to all (or even many) readers in our profession, but if someone’s reading the editorial pages in this journal, that person is at least curious about what I have to say, not to admire the literary equivalent of a cabinet made from particleboard. The same is true for someone reading a commentary published under your name, a scientific article from your team, or a letter to the editor that you wrote. Any of the available LLMs can fake it for you, but only you can say what you really believe or share how you really feel. For that reason—and because we're getting overwhelmed with AI-hallucinated references that do not contain the information the LLMs (and their human assistants) claim they contain—we’ve recently started taking a harder line on use of AI in letters to the editor [4], and going forward, we’re going to apply similar standards to the CORR Insights® commentaries that we receive. This is going to be a long, tough ball game. Welcome to the second inning [1]. Oh, and happy 2026. “The Honor of Service—and the Cost of Compromise” “By the Editor-in-Chief, December 2025” [Editor’s Note: I did not write this. GPT-4o did. SSL] “As another year draws to a close, we pause once again to extend heartfelt thanks to the lifeblood of this journal—our peer reviewers. In an era where time is increasingly scarce and incentives for academic service have dwindled, over a thousand colleagues have nonetheless volunteered their expertise to help CORR fulfill its mission: to advance orthopaedic care by ensuring the integrity, rigor, and relevance of what we publish. Your effort—often unseen, occasionally thanked, but always essential—makes this work possible. On behalf of our editorial board, our authors, and our readers: thank you. 
“Yet amid this gratitude, I find myself reflecting with increasing unease on the shifting landscape of scholarly publishing. Over the past year, troubling trends have accelerated. We’ve seen the proliferation of paper mills, the erosion of authentic authorship, and an unsettling rise in the commodification of publication itself. Journals that once stood as sentinels of scientific integrity now face economic pressures that tempt compromise. In too many corners of the academic world, the value of a publication is judged not by its contribution to knowledge, but by its utility as a currency for promotion, funding, or reputation. “This moment demands honest introspection from all of us—editors, reviewers, authors, and institutions alike. The peer review process, for all its imperfections, remains our best defense against error, fraud, and hype. But it cannot function as intended if burdened by conflicts of interest, gamified metrics, or a reward structure that punishes transparency. At CORR, we continue to invest in reviewer education, transparent editorial policies, and a culture that prioritizes the long-term good over short-term gain. These are modest safeguards, but they matter. “To our reviewers: your work this year has not just improved manuscripts—it has defended the trustworthiness of the literature itself. That is no small thing. In recognizing your contributions, we are reminded that peer review is more than a service—it is a professional obligation grounded in the principle that the pursuit of truth is a collective endeavor. This belief has long distinguished medicine from other domains, and it is worth defending with some urgency. “As we look ahead to 2026, I hope we will continue to rise—together—to the challenge of sustaining a publishing culture that rewards rigor, invites humility, and serves patients. The pressures to cut corners and chase metrics will remain. 
But so too will the resolve of this community, made visible in the thousands of hours our reviewers quietly donate each year. Thank you again for your commitment. It is a privilege to do this work alongside you. “[ChatGPT:] Let me know if you'd like to tailor this for a specific journal or community.”


Topics

Artificial Intelligence in Healthcare and Education · Social Media in Health Education · Diversity and Career in Medicine