OpenAlex · Updated hourly · Last updated: 18.03.2026, 20:20

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

Editorial: AI-assisted Letters to the Editor—Scope of a Growing Ethical and Practical Concern, and CORR’s Approach to Managing It

2025 · 5 citations · Clinical Orthopaedics and Related Research · Open Access
Open full text at publisher

Citations: 5 · Authors: 2 · Year: 2025

Abstract

In the quaint beforetimes, which is to say about a year and a half ago, the team here at Clinical Orthopaedics and Related Research® shared our enthusiasm for post-publication dialogue about the research published between our covers [1]. Most of that dialogue has taken the form of letters to the editor from curious readers about the papers published in CORR®, followed by replies to those letters from the authors who published the original work. Because of the importance of post-publication dialogue, we try to take a permissive approach to publishing letters here: When that editorial [1] was published, the current editorial board of this journal—then in its 12th year as a team—had in fact published all the letters we'd received, at a typical volume of nearly 50 letters and replies per year. My, how things change. As of the time we drafted this editorial (June 2025), we had received more letters in the first 6 months of 2025 than we had received in any previous full year. This should have been a boon to celebrate. Instead, it's a bane: Analyses we performed on those letters found that about half of them were likely written by artificial intelligence (AI)–based large language models (LLMs) like ChatGPT (Fig. 1).

Fig. 1: The number of letter to the editor submissions CORR has received, by year, since 2020. The 2025 bar reflects only the first 6 months of the year, and the red portion represents the number of letters that were very likely written using an LLM. (These totals represent submissions of letters to the editor but not replies to those letters, and include those that were withdrawn or rejected.)

Our editorial policy contains no prohibition against authors using LLMs for this purpose, and we have in the past shared our belief that using these tools for translation and language assistance might be a good thing, as it may help democratize scientific dialogue [2].
This still may be true, but as of right now, our observation is that using AI in the scientific writing process seems mainly to be resulting in the mass production of junk. We’re not so naïve as to think we’ve detected all of the AI-written or AI-assisted letters that have been submitted here, but those we have identified and confirmed share several common features: stereotyped phrasings and narrative construction, a seemingly fetishistic habit of placing adjectives and adverbs in unnatural locations, and otherwise-unexplainable compositional similarities that are shared among letters written by entirely different author groups. We’d be able to swallow all of that, perhaps, if not for these other warty features: misattributed quotes, invented references, a near-total absence of actionable insight, and little effort on the part of the letter-writers and their allied bots to engage the authors of the source articles in an informative conversation, or even to ask them questions. We find ourselves in something of a bind. We want to facilitate post-publication dialogue. We recognize that writing is difficult for some people and that this is especially true for those writing in a second language. But we have too much respect for our readers to flood the zone with so much empty content, and we don’t have the time or the appetite to try to determine which references or quotes in a low-content letter may have been invented through a kind of error specific to LLMs called “hallucination.” Sadly, there is no easy fix. We don’t want to disqualify letters merely because AI was used, given that English is not the first language of most people; using LLMs for linguistic help can level what has, in the past, been a very uneven playing field. 
But there is a big difference between using AI for translation or for putting the final touches on a letter that one has drafted (whether in one’s own language or in another) and simply asking an LLM to produce a letter to the editor in the interest of scoring a quick byline in a leading journal. An early tipoff on this issue came when the same author submitted two letters to the editor—one about an article covering periacetabular osteotomy and one about foot infections—on the same day. We’ve since received several other odd couplings along those lines. Although it’s possible someone could be an expert (or even a curious amateur) in topics as disparate as those, we’ve not yet met such a person. To get a sense for how easy it would be to have a freely available LLM write us a letter, and to confirm our hunches about what the product from such a tool would look like, in June 2025, we prompted ChatGPT (version 4o) to “write a ‘letter to the editor’” about a study that one of our confirmed AI-assisted letters also covered. We provided no other directions to the LLM as part of this experiment. The letter that ChatGPT drafted for us bore numerous structural similarities to the AI-generated letters we’ve received, including its compositional framework and phrasing, as well as the way it performed a mere “compare-and-contrast” of the study in question with other studies without asking specific, thoughtful questions of the study’s authors, which is the main purpose of letters to the editor. The machine-produced letter also came larded with the same tired, hackneyed phrases we’ve seen in so many of these letters. And, most importantly, like many of the AI-assisted or generated letters we’ve received, it contained hallucinated content—in this case, a “fact” it attributed to a source article that was not actually in that article. 
This simple exercise substantiated our concern that LLMs are being used to write many of the letters to the editor we’ve received, and not merely to translate them. We see this as unethical behavior because the underlying ideas, rhetorical structure, and actual words are not those of the letter-writers (who are claiming authorship), but rather of the LLM, and the fact that so many come in with hallucinated references confirms that the (human) authors are not even ensuring that the foundations of the AI tool’s argument are sound. One need not be a surgeon or a scientist to write a letter to the editor of a surgical journal anymore; one need only ask a freely available LLM to do the job. This is problematic for many reasons, but none more important to readers than this one: The work produced by this kind of human-machine partnership will be predictable, dull, and uninformative. With this information in hand and with readers’ interests in mind, we feel compelled to modify our letter to the editor process. Going forward, we will ask authors of AI-assisted content—letters to the editor, CORR Insights® commentaries, and articles of other kinds—to provide a verifiable quote from each cited source that substantiates each claim being made. This will facilitate our vetting of references and mitigate the likelihood that hallucinated material will be disseminated. We will only publish letters and commentaries from authors who provide us with this assistance. Further, because of the additional burden the editing of these letters places on our editorial team, we will necessarily apply a more stringent standard in terms of what each letter offers readers when deciding which letters to the editor will be published. Letters that don’t meaningfully engage the authors of the source study in a dialogue will not be considered.
In addition, as noted before [2, 3], any use of these tools in the writing, editing, or translation of these letters will need to be disclosed in a way that identifies the tool and the purpose for using it.


Topics

Artificial Intelligence in Healthcare and Education