This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Explainable AI for Clinical Decision Support Systems: Literature Review, Key Gaps, and Research Synthesis
4 citations
6 authors
Year: 2025
Abstract
While Artificial Intelligence (AI) promises significant enhancements for Clinical Decision Support Systems (CDSSs), the opacity of many AI models remains a major barrier to clinical adoption, primarily due to interpretability and trust challenges. Explainable AI (XAI) seeks to bridge this gap by making model reasoning understandable to clinicians, but technical XAI solutions have too often failed to address real-world clinician needs, workflow integration, and usability concerns. This study synthesizes persistent challenges in applying XAI to CDSS—including mismatched explanation methods, suboptimal interface designs, and insufficient evaluation practices—and proposes a structured, user-centered framework to guide more effective and trustworthy XAI-CDSS development. Drawing on a comprehensive literature review, we detail a three-phase framework encompassing user-centered XAI method selection, interface co-design, and iterative evaluation and refinement. We demonstrate its application through a retrospective case study analysis of a published XAI-CDSS for sepsis care. Our synthesis highlights the importance of aligning XAI with clinical workflows, supporting calibrated trust, and deploying robust evaluation methodologies that capture real-world clinician–AI interaction patterns, such as negotiation. The case analysis shows how the framework can systematically identify and address user-centric gaps, leading to better workflow integration, tailored explanations, and more usable interfaces. We conclude that achieving trustworthy and clinically useful XAI-CDSS requires a fundamentally user-centered approach; our framework offers actionable guidance for creating explainable, usable, and trusted AI systems in healthcare.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,310 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations