This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
A review of Explainable Artificial Intelligence in healthcare
Citations: 311
Authors: 16
Year: 2024
Abstract
• Emphasizes the need for transparency to build healthcare professionals' trust in AI systems.
• Addresses the critical need for explainability due to potential high-impact consequences of AI errors in healthcare.
• Categorizes XAI methods into six groups for healthcare research: feature-oriented, global, concept, surrogate, local pixel-based, and human-centric.
• Analyzes the significance of XAI in overcoming healthcare-specific challenges.
• Provides an exhaustive review of XAI applications and relevant experimental results in healthcare contexts.

Explainable Artificial Intelligence (XAI) encompasses the strategies and methodologies used in constructing AI systems that enable end-users to comprehend and interpret the outputs and predictions made by AI models. The increasing deployment of opaque AI applications in high-stakes fields, particularly healthcare, has amplified the need for clarity and explainability. This stems from the potential high-impact consequences of erroneous AI predictions in such critical sectors. The effective integration of AI models in healthcare hinges on the capacity of these models to be both explainable and interpretable. Gaining the trust of healthcare professionals necessitates AI applications to be transparent about their decision-making processes and underlying logic. Our paper conducts a systematic review of the various facets and challenges of XAI within the healthcare realm. It aims to dissect a range of XAI methodologies and their applications in healthcare, categorizing them into six distinct groups: feature-oriented methods, global methods, concept models, surrogate models, local pixel-based methods, and human-centric approaches. Specifically, this study focuses on the significance of XAI in addressing healthcare-related challenges, underscoring its vital role in safety-critical scenarios.
Our objective is to provide an exhaustive exploration of XAI's applications in healthcare, alongside an analysis of relevant experimental outcomes, thereby fostering a holistic understanding of XAI's role and potential in this critical domain.
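To make the method categories above concrete, here is a minimal sketch of one feature-oriented technique, permutation importance: shuffle one feature column and measure how much the model's error grows. The "risk model", feature meanings, and all names below are illustrative assumptions for this page, not code or data from the paper.

```python
import random

def toy_risk_model(x):
    # Hypothetical "black-box": a linear risk score over two features
    # (say, age and BMI). Purely illustrative, not from the paper.
    return 0.8 * x[0] + 0.05 * x[1]

def permutation_importance(predict, X, y, n_repeats=30, seed=0):
    """Feature-oriented explanation: mean increase in squared error
    when one feature column is shuffled, breaking its link to y."""
    rng = random.Random(seed)

    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    baseline = mse(X)
    importances = []
    for j in range(len(X[0])):
        increases = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # destroy the association of feature j with y
            permuted = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            increases.append(mse(permuted) - baseline)
        importances.append(sum(increases) / n_repeats)
    return importances

# Synthetic patients: feature 0 dominates the score, so shuffling it
# should raise the error far more than shuffling feature 1.
X = [[float(i), float(10 - i)] for i in range(8)]
y = [toy_risk_model(x) for x in X]
imp = permutation_importance(toy_risk_model, X, y)
```

A global method would summarize such scores across the whole dataset, while local methods (e.g., pixel-based attribution like Grad-CAM, listed under related works) explain a single prediction instead.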
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,246 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,228 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,150 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,091 citations