This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Learning to fake it: limited responses and fabricated references provided by ChatGPT for medical questions
Citations: 32
Authors: 3
Year: 2023
Abstract
Background: ChatGPT has gained public notoriety and has recently been used to support manuscript preparation. Our objective was to evaluate the quality of the answers and the references provided by ChatGPT for medical questions.

Methods: Three researchers asked ChatGPT a total of 20 medical questions and prompted it to provide the corresponding references. The responses were evaluated for quality of content by medical experts using a verbal numeric scale ranging from 0 to 100%. These experts were the corresponding authors of the 20 articles from which the medical questions were derived. We planned to evaluate three references per response for their pertinence, but this was amended based on preliminary results showing that most references provided by ChatGPT were fabricated.

Results: ChatGPT provided responses between 53 and 244 words long and reported two to seven references per answer. Seventeen of the 20 invited raters provided feedback. The raters reported limited quality of the responses, with a median score of 60% (1st and 3rd quartiles: 50% and 85%). Additionally, they identified major (n=5) and minor (n=7) factual errors among the 17 evaluated responses. Of the 59 references evaluated, 41 (69%) were fabricated, though they appeared real. Most fabricated citations used the names of authors with previous relevant publications, a title that seemed pertinent, and a credible journal format.

Interpretation: When asked multiple medical questions, ChatGPT provided answers of limited quality for scientific publication. More importantly, ChatGPT provided deceptively real references. Users of ChatGPT should pay particular attention to the references provided before integrating them into medical manuscripts.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations