OpenAlex · Updated hourly · Last updated: 18.03.2026, 14:50

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

Editorial: Uses of Generative Artificial Intelligence in Clinical Orthopaedics and Related Research ®—An Update

2024 · 6 citations · Clinical Orthopaedics and Related Research · Open Access
Open full text at the publisher

6 citations · 8 authors · 2024

Abstract

Last year, Clinical Orthopaedics and Related Research® partnered with three other leading orthopaedic journals—JBJS, BJJ, and JOR—to articulate some preliminary standards about using artificial intelligence (AI) applications in the creation of manuscripts for submission to those journals [7]. In brief, those journals agreed that AI applications cannot be listed as authors, and that any use of AI applications in the research or writing of a manuscript must be disclosed in the Methods section and mentioned again in the Acknowledgments. We understood then that this dynamic area would evolve quickly, and it's already time for an update to CORR's editorial policies on this topic:

1. We can't and shouldn't try to play cat-and-mouse. If you follow the news on this topic, you're aware that new generative AI applications are constantly being created and old ones updated. Because of the potential harms associated with those applications—deepfakes in mainstream media, error and fraud in scientific reporting—tools to detect the use of AI in the creation of text, images, and video also are constantly being created and updated. It seems nearly certain that over time, this arms race will be won by the AI tools that create content rather than by new or improved detection tools. We therefore will trust authors to follow our editorial policies about disclosing the use of AI in research and scientific reporting articulated here, in any future editorials, and in our initial essay on the topic [7], as we don't expect to be able to function effectively as the "content police" over the long term. This is, in a sense, not much different from how CORR and other journals deal with other kinds of scientific fraud. Purposeful fraudsters often evade detection (at least for some time) because peer review was designed to improve research as part of a collaborative, truth-seeking process that involves authors, reviewers, and editors working together. It was never built to catch determined cheaters, and it is ill-suited to that task.

2. We recognize there may be benefits to using AI applications in scientific reporting and will try to enable them when appropriate. Human authors are ultimately responsible for the content of their papers. But about half of the papers we publish come from outside the United States, and for the authors of many of these papers, English is not a first language. Some AI-based tools for language translation are excellent and easy to use. Authors who wish to use an AI-based tool to improve the quality of a paper's presentation (as many already do [9]) are free to do so, provided that they disclose that use in accordance with our existing policy [7]. Our shared goal is clear reporting, and we favor anything that helps us achieve that goal. For much the same reason, we're happy when authors use professional manuscript editors (as long as they disclose the help of those entities). That said, authors who wish to use generative AI tools to improve the clarity of scientific reporting must shoulder two additional responsibilities:

Because human authors, not machines, are fully responsible for the work they submit, they must take special care to proofread their work if they employ AI tools to help write or edit their manuscripts. Generative AI tools are known to "hallucinate" (that is, deliver inaccurate content and even invent sources that don't exist), so authors—humans—must check the content carefully to make sure it represents what they want it to represent. This applies equally to text, tables, figures, and video.

To minimize the risk that an AI-based tool will plagiarize the work of others (for example, on points of Discussion or background material in an Introduction section), we suggest that authors create an initial draft containing all the points they wish to make, and that they do their best to express those points before engaging an AI-based application for language help. Recognize, though, that it is still possible for large language models to take the words of others in this process, and if that happens, the human authors of the work submitted to CORR remain responsible. If you use such a tool, pay careful attention to the references it generates; this is important, since one recent study found that about 30% of references in some biomedical research contexts may be hallucinated by the AI tool [3].

There is some loss of control when uploading content to generative AI platforms, and this may create copyright complexities. Authors are asked to transfer copyright to our parent society at the time a paper is accepted for publication. When one uploads content to some generative AI tools, that uploaded content becomes part of the tool's training data. Authors need to know what becomes of the content they've uploaded and how it is used by those platforms, since this has implications with respect to copyright law: One cannot transfer a copyright that one does not possess. Models that work offline may have special advantages here and are worth considering for this reason.

There are special concerns associated with the creation, manipulation, and modification of figures and visual presentations of data. Image manipulation has come under increased scrutiny of late, and for good reason [4, 10]. There may be differences between the primary generation of a new image using an AI-based visual tool and the modification of existing images. Any use of AI to create or modify figures or other visuals (including video) should be disclosed, and if an image submitted for publication has been manipulated in any way (whether through AI or other image-editing tools or software), that needs to be disclosed as well.

3. Reviewers for CORR may not use generative AI applications or tools for any part of the review process. As much as we like clear communication, and although many reviewers are not reviewing in their first language, reviewers must not upload manuscripts they're reviewing for CORR into generative AI applications or tools. Here's why: Copyright of the manuscript does not belong to the reviewer, so the reviewer does not have the right to share the work with a third party in this way. The National Institutes of Health has articulated related but broader confidentiality issues in its guidance banning the use of AI tools for grant review, raising concerns about "where data are being sent, saved, viewed, or used in the future" [8]. AI tools apply no independent judgment because they possess none; one can ask an AI tool to write a positive review of an article or a negative one, and it is equally "happy" to do either. For now, only humans can provide the insight CORR seeks in this process. Critics have also raised the concern that AI-driven reviews may be biased against viewpoints that swim against the mainstream, since AI tools draw from available information [5]. When asked for specific suggestions to help authors, AI-based tools seem especially prone to "hallucinations" (such as inventing sources that do not exist, or making confident-sounding but incorrect recommendations) [2]. Others have written thoughtfully and at much greater length about why generative AI applications are not fit-to-task for peer review [1].

Finally, a few words on the kinds of papers CORR is interested in publishing on tools using AI, as this question comes up frequently. On this, our standards are more or less the same as for research on any topic, with one half-twist specific to AI: The topic needs to be important (that is, the paper's findings should be able to change research, practice, or education in meaningful ways), and the research questions need to be novel (that is, the paper should either fill an important knowledge gap or help settle a key controversy). In this, we consider papers about AI-based tools similarly to other kinds of research we evaluate. Specifically, papers favoring the adoption of a new AI-based tool must provide evidence of demonstrable, practical, real-world benefits—a standard very similar to the one we apply to new clinical tools and approaches [6]. We place special priority on papers that identify specific, unexpected harms and those that present specific, immediately practical benefits. This, too, differs little from how we appraise papers on other themes. Some purely descriptive ("benchmarking") studies will clear those bars, but most won't, especially if the findings are likely to change with future updates to the tool(s) or application(s) being studied, since those updates come so frequently. We will revisit this evolving topic as often as necessary to update our editorial policies and guidelines in service of authors, readers, and the patients whom they care for.

Topics

Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging · Medical Imaging and Analysis