OpenAlex · Updated hourly · Last updated: 08.04.2026, 18:57

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

Letters from nobody: The problem of AI-written Letters to the Editor

2025 · 2 citations · Headache: The Journal of Head and Face Pain · Open Access

Citations: 2 · Authors: 2 · Year: 2025

Abstract

If Academy Award winners were selected by artificial intelligence (AI), would we still watch the awards ceremony? If you were on trial for an alleged crime, would you want AI to replace a jury of your peers? Would you want AI as the sole arbiter of where your child gets accepted into college? Chances are that the answer to at least some of these questions would be "no." There are some things that are inherently subjective and rely on human sensibility, judgment, and expertise. These decisions and judgments cannot be staffed out to AI or large language models, and when they are, they risk losing their value and integrity.

In science, our ideas are judged by our peers. Scientific manuscripts undergo a rigorous peer review process before being published in journals. This review process generally must be conducted by humans, although some journals are exploring cautious use of AI as an assistive tool.[1] For grant applications, the National Institutes of Health forbids reviewers from using AI to review grants[2] and considers grant applications "either substantially developed by AI or containing sections substantially developed by AI" as not "the original ideas of applicants" and not eligible for funding.[3] Judgment regarding a grant or scientific manuscript's merit needs to come from fellow expert scientists in the field or other human stakeholders.

Once a manuscript is published, Letters to the Editor become an important avenue for scientists to continue the discussion and debate about specific articles. Experts in the field can write a letter critiquing the manuscript or share new insights they gained from reading it. The authors of the original manuscript are typically given the opportunity to respond. This published academic discourse helps move science forward, with the ultimate goal of making the next study better. In clinical medicine, the process of ongoing critique and contextualization that Letters to the Editor provide is particularly important. Well-trained clinicians and clinician scientists may examine the same data and draw different conclusions about what it means for the best care of patients. Letters to the Editor serve as a safety check, a hypothesis incubator, and a recalibrator for the future scientific agenda. Distinct from other sources of written engagement such as social media comments or blog-type posts, Letters to the Editor go through editorial review, are bylined with the real names of authors, and become part of the written scientific record indexed to the original manuscript for posterity.

At Headache, we have observed a concerning trend: many letters seem to be written, or at least substantially written, by AI. Other medical journals have also experienced this.[4] At Headache, these letters seem to share common features: (1) they are submitted within only a few days of the original manuscript's publication (either online or in an issue); (2) a PubMed search of the authors' names reveals that they have recently published multiple Letters to the Editor in different journals on different topics; and (3) generally, the authors have not published any previous academic work in headache medicine or related fields. Without high-specificity detection software, it is not possible to prove definitively that these letters were written by AI, only to suspect that they were.

Perhaps this behavior could be forgiven if the AI-written letters were making novel, insightful points that helped advance clinical medicine; however, they are generally trifling and tend to miss the point of the original study. For example, in response to a real-world evidence chart review study, the letter might argue that conducting a randomized controlled trial would have been a more effective approach. No one would disagree with that statement, but randomized controlled trials take years to conduct and are expensive. In the meantime, real-world observational evidence can provide valuable insights at little cost, and the authors of the original study have generally already acknowledged in the original manuscript the limitations inherent to their study design. Suspected AI-written letters tend to lack clinical common sense or research acumen. They nitpick rather than elevate, all while gumming up editorial office efficiency. When AI-written letters slip through and are published, they clutter the field. Once identified as AI-written, Letters to the Editor may be retracted.[5] Retractions cause reputational harm to journals[4] and can give journal editors pause about whether we should continue to publish letters at all.[6]

Since the release of ChatGPT in November 2022, Headache's editorial leadership team has met regularly to discuss generative AI in scholarly publishing. Headache permits the use of AI in preparing manuscripts as long as (1) the use is declared and described, and (2) the editorial team agrees that the use of AI was appropriate for the declared purpose. When it comes to writing Letters to the Editor or other inherently subjective works such as Perspective pieces, it is not appropriate to use AI to generate ideas. The suspected AI-written letters submitted to Headache contained either no disclosure about the use of AI or stated that AI was used solely for improving language and grammar. Based on 2024 Committee on Publication Ethics guidance,[7] AI cannot meet the criteria for authorship, as it cannot take responsibility for content. Moreover, being an author requires a substantial contribution to the intellectual content of the work. If AI develops most or all "intellectual content," yet cannot be an author, the manuscript is functionally authorless: a "letter from nobody."

As a still fully human editorial team, we believe that scientific and medical publishing is a critical area where humans need to take a stand about intellectual integrity in medicine and the scientific process. Although we celebrate technology as a tool to assist authors with language, grammar, and flow, and as an increasingly powerful analytical tool, using AI to generate an onslaught of "letters from nobody" betrays AI's potential to support scholarly publishing and the advancement of medicine.

Amy A. Gelfand: Conceptualization; writing – original draft; writing – review and editing. Jenn Vallimont: Writing – review and editing.

In the last 24 months, Amy A. Gelfand has received royalties from UpToDate (for authorship), and honoraria from Elsevier (for authorship), the American Academy of Neurology (for editing), and the Weill Cornell Neurology Department, Københavns Universitet, and the College Board (for speaking). She receives a stipend from the American Headache Society for her role as Editor of Headache. She receives grant support from PCORI as a member of the Steering Committee for the REACH study and from the UCSF Resource Allocation Program as an investigator. She is also supported by a generous philanthropic donation made by Nathalie and Nicolas Giauque to the UCSF Child & Adolescent Headache Program. Jenn Vallimont receives payment from the American Headache Society for her role as managing editor of Headache.

Topics

Artificial Intelligence in Healthcare and Education · Diversity and Career in Medicine · Radiology practices and education