OpenAlex · Updated hourly · Last updated: 15.03.2026, 04:18

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Human versus artificial intelligence: investigating ability of young academics from research and non-research institutions to identify ChatGPT-generated dental research abstracts

2026 · 0 citations · Scientific Reports · Open Access

Citations: 0 · Authors: 9 · Year: 2026

Abstract

The rapid adoption of generative artificial intelligence (AI) tools such as ChatGPT in academic writing raises concerns about research integrity and authorship transparency, including in dentistry. The aim of this study was to investigate whether young dental academics from research and non-research universities can differentiate original abstracts from ChatGPT-generated abstracts, and to compare their performance and accuracy with those of three AI-output detectors and a similarity detector. Six early-career academics (≤ 2 years of academic experience) from six different universities reviewed 150 dental research abstracts (75 original and 75 ChatGPT-generated) under blinded conditions and assessed abstract quality using a previously developed rubric. The same abstracts were also evaluated using the GPT-2 Output Detector, the Writefull GPT Detector, GPTZero, and Turnitin similarity detection. Blinded human reviewers and most AI tools misjudged abstract origin at variable rates. Correlation analyses showed significant positive associations between abstract type and all assessment variables, while similarity detection demonstrated an inverse relationship (p < 0.05). Overall, young academics, regardless of institutional category, had difficulty identifying the origin of AI-generated abstracts, whereas GPTZero showed the highest discrimination accuracy (90.0%). This suggests that early-career status and the current level of training in, and exposure to, AI-assisted writing may matter more than institutional category alone. These findings indicate that relying on human judgment alone is insufficient for identifying AI-assisted academic text and that selected detection tools may support academic integrity safeguards as AI writing technologies continue to evolve.
