This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Is it time for the neurologist to use Large Language Models in everyday practice? (Preprint)
Citations: 0
Authors: 10
Year: 2025
Abstract
BACKGROUND: Large Language Models (LLMs) such as ChatGPT and Gemini are increasingly explored for their potential in medical diagnostics, including neurology. Their real-world applicability remains inadequately assessed, particularly in clinical workflows where nuanced decision-making is required.
OBJECTIVE: To evaluate the diagnostic accuracy and appropriateness of clinical recommendations provided by ChatGPT and Gemini compared to neurologists, using real-world clinical cases.
METHODS: The study followed a two-phase approach: (1) a systematic review of the literature on LLMs in neurology diagnosis, assessing whether the applied methodologies are adequate for clinical translation, and (2) an experimental evaluation in which real-world neurology cases were presented to ChatGPT and Gemini and their diagnostic performance was compared with that of clinical neurologists. The study simulated a first visit using information from anonymized patient records from the neurology department of the ASST Santi Paolo e Carlo Hospital (Milan, Italy), ensuring a real-world clinical context. A cohort of 28 anonymized patient cases was selected from routine neurology consultations, covering a range of neurological conditions and diagnostic complexities representative of daily clinical practice. The primary outcome was the diagnostic accuracy of both neurologists and LLMs, defined as concordance with the discharge diagnosis. Secondary outcomes included the appropriateness of recommended diagnostic tests and the extent of additional prompting required for accurate responses.
RESULTS: Of the 24 studies identified in the literature review, most employed heterogeneous methodologies built on structured prompts specifically designed for interaction with LLMs, but lacked real-world case evaluations. In the experimental phase, neurologists achieved a diagnostic accuracy of 75%, outperforming ChatGPT (54%) and Gemini (46%). Both LLMs showed limitations in nuanced clinical reasoning and over-prescribed diagnostic tests in 17–25% of cases. Complex or ambiguous cases additionally required further prompting to refine the AI-generated responses.
CONCLUSIONS: While LLMs show potential as supportive tools in neurology, they currently lack the depth required for independent clinical decision-making. Future research should focus on refining LLM capabilities and developing evaluation methodologies that reflect the complexities of real-world neurological practice, ensuring the effective, responsible, and safe use of these promising technologies.
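As a purely illustrative aside (the abstract reports only aggregate figures, and the authors' actual code and data are not available here), the two outcome measures described in the METHODS section could be computed from per-case judgments roughly as in the following minimal Python sketch; the CaseResult record, field names, and sample values are all assumptions, not the study's materials.

from dataclasses import dataclass

@dataclass
class CaseResult:
    # Hypothetical record of one evaluated case; not the authors' data model.
    case_id: str
    concordant: bool       # diagnosis matched the discharge diagnosis
    over_prescribed: bool  # recommended tests beyond what was appropriate
    extra_prompts: int     # additional prompts needed to refine the answer

def diagnostic_accuracy(results: list[CaseResult]) -> float:
    """Primary outcome: share of cases concordant with the discharge diagnosis."""
    return sum(r.concordant for r in results) / len(results)

def over_prescription_rate(results: list[CaseResult]) -> float:
    """Secondary outcome: share of cases with unnecessary tests recommended."""
    return sum(r.over_prescribed for r in results) / len(results)

# Dummy data for demonstration only; the study evaluated 28 anonymized cases.
demo = [
    CaseResult("case-01", concordant=True, over_prescribed=False, extra_prompts=0),
    CaseResult("case-02", concordant=False, over_prescribed=True, extra_prompts=2),
    CaseResult("case-03", concordant=True, over_prescribed=False, extra_prompts=1),
]
print(f"Diagnostic accuracy: {diagnostic_accuracy(demo):.0%}")
print(f"Over-prescription rate: {over_prescription_rate(demo):.0%}")

Under this sketch, the reported 75% vs. 54% vs. 46% accuracies would simply be diagnostic_accuracy applied to each rater's 28 case results.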
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations