OpenAlex · Updated hourly · Last updated: 22.04.2026, 02:03

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

SP46. Using ChatGPT to Review the Literature: A Cautionary Tale

2025 · 0 citations · Plastic & Reconstructive Surgery - Global Open · Open Access

0 citations · 5 authors · Year: 2025

Abstract

PURPOSE: ChatGPT has shown impressive results in the medical field, recently matching residents on the Plastic Surgery In-Service Examinations. In academic writing, ChatGPT can generate ideas, organize thinking, and rewrite difficult sections of a paper, although its proper ethical use remains highly debated. Because ChatGPT is a potential tool for scientific writing yet is barred from authorship in most peer-reviewed journals, we sought to document the abilities of such technologies and consider their appropriate applications in publication. Herein we define the strengths and weaknesses of ChatGPT in writing a literature review on autologous fat grafting.

METHODS: ChatGPT-4o (OpenAI, San Francisco, CA, USA) was used to generate a literature review article from ideation to final editing. ChatGPT was asked for three topics within plastic and reconstructive surgery to review, and autologous fat grafting was chosen from the provided ideas. ChatGPT was prompted to create an outline and then write each section with corresponding citations. The references were evaluated for accuracy via a human-supervised PubMed search. Final editing was accomplished by asking ChatGPT to match the tone and style of a published narrative review. The writing was compared to published work through a survey of medical professionals. One paragraph was submitted to two AI detectors, WinstonAI (Montreal, Quebec, CAN) and ZeroGPT (Casper, WY, USA).

RESULTS: ChatGPT brainstormed three topics in plastic and reconstructive surgery: biomaterials in tissue engineering, autologous fat grafting, and scar management. Autologous fat grafting was selected, and ChatGPT provided a clear outline with subtopics including the applications, techniques, and challenges of fat grafting. After prompting, ChatGPT successfully wrote two paragraphs for each section, resulting in a cohesive overview of autologous fat grafting. It then edited the content to match the tone and style of the published narrative review it was provided, making it difficult to distinguish from human authorship. In a survey of trainees, attendings, and researchers, 53% correctly identified the abstract written by ChatGPT, and 67% of respondents indicated they would not suspect AI input if the abstract appeared in a scientific journal. Further analysis of the AI-written content revealed vague statements and erroneous citations. Of the 21 citations, 5 were correct, 8 contained errors, and 8 could not be found in PubMed. When asked to summarize an imagined citation, ChatGPT fabricated a study, complete with methods and results. When provided a real citation, ChatGPT misrepresented the results, adding variables and statistical significance not present in the original. Once provided with the entire paper, ChatGPT generated an accurate summary.

CONCLUSIONS: ChatGPT-4o performed well in suggesting scientific topics, generating an organized outline, and editing provided material. Its writing was professional and difficult to distinguish from human-authored material. However, ChatGPT failed to accurately cite existing sources and fabricated entire studies. By leading ChatGPT through a literature review, we have defined successful use cases in academic writing, as well as areas to approach with caution. As with any tool, authors must adhere to the standards of their targeted journal and take full responsibility for all submitted work.
