This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Current state and promise of user-centered design to harness explainable AI in clinical decision-support systems for patients with CNS tumors
Citations: 8
Authors: 4
Year: 2025
Abstract
In neuro-oncology, MR imaging is crucial for obtaining detailed brain images to identify neoplasms, plan treatment, guide surgical intervention, and monitor the tumor's response. Recent AI advances in neuroimaging have promising applications in neuro-oncology, including guiding clinical decisions and improving patient management. However, the lack of clarity on how AI arrives at predictions has hindered its clinical translation. Explainable AI (XAI) methods aim to improve trustworthiness and informativeness, but their success depends on considering end-users' (clinicians') specific context and preferences. User-Centered Design (UCD) prioritizes user needs in an iterative design process, involving users throughout, providing an opportunity to design XAI systems tailored to clinical neuro-oncology. This review focuses on the intersection of MR imaging interpretation for neuro-oncology patient management, explainable AI for clinical decision support, and user-centered design. We provide a resource that organizes the necessary concepts, including design and evaluation, clinical translation, user experience and efficiency enhancement, and AI for improved clinical outcomes in neuro-oncology patient management. We discuss the importance of multi-disciplinary skills and user-centered design in creating successful neuro-oncology AI systems. We also discuss how explainable AI tools, embedded in a human-centered decision-making process and different from fully automated solutions, can potentially enhance clinician performance. Following UCD principles to build trust, minimize errors and bias, and create adaptable software has the promise of meeting the needs and expectations of healthcare professionals.