OpenAlex · Updated hourly · Last updated: 01.04.2026, 23:44

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

D67. Challenges in Detecting Artificial Intelligence in Scientific Writing: An Analysis of Clinical Reviewers and AI Detection Tools in Plastic Surgery

2025 · 0 citations · Plastic & Reconstructive Surgery Global Open · Open Access

0 citations · 5 authors · 2025

Abstract

PURPOSE: The increasing use of artificial intelligence (AI) in academic writing has raised concerns about the integrity of scientific manuscripts. This study aims to evaluate the ability of medical professionals and online AI-detection tools to detect AI involvement in plastic surgery manuscript passages.

METHODS: This study analyzed eight passages (four on plastic surgery topics) that were either fully human-written, human-written and AI-edited, or fully AI-generated. Ten raters (medical students, residents, and attending plastic surgeons) classified each passage by origin, and accuracy was assessed. Fleiss' kappa measured interrater reliability. Three online AI-detection tools also analyzed the passages, with intraclass correlation coefficients (ICC) evaluating tool agreement on the percentage of AI-generated content.

RESULTS: Human raters correctly identified the origin of the passages only 25% of the time, with no difference in accuracy between plastic surgery and non-plastic surgery topics. Raters correctly identified fully AI-generated passages 27.5% of the time, while entirely human-written passages were correctly identified 20% of the time. Interrater reliability among human raters was 0.304, while the ICC across the three online AI-detection tools was 0.097. AI-detection tools incorrectly classified human-written content as more than 50% AI-generated in two-thirds of ratings.

CONCLUSION: This study highlights the difficulty of accurately identifying AI involvement in manuscripts. Human raters showed low accuracy with fair interrater agreement, while AI tools frequently misclassified human-written content as AI-generated and demonstrated poor agreement. These findings indicate a need for better methods to uphold the integrity of plastic surgery and scientific writing.
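The abstract reports a Fleiss' kappa of 0.304 for the human raters. For readers unfamiliar with the statistic, here is a minimal pure-Python sketch of how Fleiss' kappa is computed from a subjects-by-categories table of rating counts; the rating counts below are purely illustrative, not the study's data.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for multiple raters on categorical ratings.

    counts[i][j] = number of raters who assigned subject i to category j.
    Every subject must be rated by the same number of raters.
    """
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    n_total = n_subjects * n_raters

    # Per-subject agreement: fraction of rater pairs that agree.
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_subjects  # mean observed agreement

    # Chance agreement from the marginal category proportions.
    n_categories = len(counts[0])
    p_j = [sum(row[j] for row in counts) / n_total
           for j in range(n_categories)]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)


# Two hypothetical passages, 3 raters, 2 origin categories.
# Perfect agreement yields kappa = 1.0:
print(fleiss_kappa([[3, 0], [0, 3]]))  # 1.0
```

A kappa near 0.3, as in the study, falls in the conventional "fair agreement" band, consistent with the authors' description of the human raters.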
