This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Can ChatGPT Function as a Virtual Multidisciplinary Team? A Proof-of-Concept Study in Vascular Malformation Syndromes (Preprint)
Citations: 0 · Authors: 9 · Year: 2025
Abstract
BACKGROUND: Sturge-Weber syndrome (SWS) and Klippel-Trenaunay syndrome (KTS) are complex vascular malformation syndromes that require multidisciplinary team (MDT) management. However, the traditional MDT approach faces challenges such as time coordination, geographical barriers, and inefficiencies in cross-disciplinary communication.

OBJECTIVE: This study aims to evaluate ChatGPT's potential in simulating MDT decision-making for SWS and KTS by comparing its diagnostic and treatment recommendations with traditional MDT conclusions.

METHODS: A case-based proof-of-concept design was employed, retrospectively analyzing MDT records of two SWS and two KTS patients. Clinical data, imaging, and laboratory results were input into ChatGPT, and its outputs were evaluated by two dermatology experts using a 1-5 Likert scale across five dimensions: accuracy, completeness, appropriateness, insight, and safety.

RESULTS: ChatGPT performed well in most dimensions, particularly in appropriateness, but showed occasional uncertainty in handling complex or rare cases, such as in gene-phenotype associations. Inter-rater reliability ranged from negligible to moderate (ICC -0.00 to 0.63), with no significant differences observed between the experts' ratings (p > 0.05).

CONCLUSIONS: ChatGPT shows strong potential as a tool for simulating MDT decision-making, particularly in diagnostic accuracy, completeness of recommendations, and cross-disciplinary insights. However, it has limitations in managing complex cases and ensuring the feasibility of its recommendations. Future studies with larger sample sizes and multi-center validation are needed to fully assess its clinical value.
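The inter-rater reliability reported above could be computed along the following lines. This is a minimal sketch, assuming the two-way random-effects, absolute-agreement, single-rater variant ICC(2,1); the abstract does not state which ICC form was used, and the `ratings` data here are hypothetical, not the study's.

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_targets, k_raters) array, e.g. Likert scores
    given by k experts to n evaluation items.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-item means
    col_means = ratings.mean(axis=0)   # per-rater means
    # Two-way ANOVA sum-of-squares decomposition
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)            # mean square, targets
    msc = ss_cols / (k - 1)            # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: two raters, one scoring consistently higher
print(icc2_1([[1, 2], [2, 3], [3, 4]]))  # systematic offset lowers agreement
```

Note that ICC(2,1) penalizes systematic rater bias (via the `msc` term), which is appropriate here since absolute agreement between the two experts, not mere consistency, is what validates the Likert ratings.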