This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Performance of AI tools in citing retracted literature (Preprint)
Citations: 0 · Authors: 6 · Year: 2025
Abstract
<sec> <title>BACKGROUND</title> Artificial intelligence is increasingly used in scientific research to generate, refine, and summarize literature. Its ability to process large datasets promises greater efficiency in evidence synthesis and review. However, generative AI tools often produce inaccurate results and may cite retracted or unreliable studies without warning, posing risks to research integrity. Whether these systems can reliably detect and exclude retracted publications remains unclear. </sec> <sec> <title>OBJECTIVE</title> In this pragmatic trial, nine freely available generative AI tools were tested for their ability to answer questions without citing retracted literature. </sec> <sec> <title>METHODS</title> Each generative AI tool was asked five standardized questions about 15 different retracted articles. The articles were chosen from the Retraction Watch database, including the most cited and most recently retracted articles. All questions were repeated twice to assess consistency, and answers were rated for accuracy and reliability. </sec> <sec> <title>RESULTS</title> None of the nine AI tools consistently identified or excluded retracted articles. ChatGPT-5 performed best (8/15, 53.3% correct), while SciSpace, ScienceOS, and Consensus showed no fully correct results. Microsoft Copilot achieved the highest topic-overview accuracy (87%), and ChatGPT-4 showed the greatest consistency (97.2%). OpenEvidence performed reliably within medical literature but reached perfect accuracy in only 2 of 13 (15.4%) cases. </sec> <sec> <title>CONCLUSIONS</title> No free generative AI tool can reliably detect or exclude retracted studies. Even the best systems missed a substantial proportion of retracted articles. Until retraction-aware verification is integrated, independent source checking remains essential to preserve research integrity. </sec> <sec> <title>CLINICALTRIAL</title> https://doi.org/10.17605/OSF.IO/B6J2W </sec>
Related works
International Journal of Scientific and Research Publications
2022 · 2,691 citations
Student writing in higher education: An academic literacies approach
1998 · 2,490 citations
Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling
2012 · 2,303 citations
How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data
2009 · 1,918 citations
Chatting and cheating: Ensuring academic integrity in the era of ChatGPT
2023 · 1,748 citations