This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Maximizing impact of explainable artificial intelligence in radiotherapy: a critical review
Citations: 0
Authors: 7
Year: 2025
Abstract
<i>Objective.</i> Artificial intelligence (AI) can enable automation, improve treatment accuracy, allow for a more efficient workflow, and improve the cost-effectiveness of radiotherapy (RT). To implement AI in RT, clinicians have expressed a desire to understand the AI outputs. Explainable AI (XAI) methods have been put forward as a solution, but the multidisciplinary nature of RT complicates the application of trustworthy and understandable XAI methods. The objective of this review is to analyze XAI in the RT landscape and understand how XAI can best support the diverse user groups in RT by exploring challenges and opportunities with a critical lens. <i>Approach.</i> We performed a review of XAI in RT, evaluating how explanations are built, validated, and embedded across the RT workflow, with attention to XAI purposes, evaluation and validation, interpretability trade-offs, and RT's multidisciplinary context. <i>Main results.</i> XAI in RT serves five purposes: (1) knowledge discovery, (2) model verification, (3) model improvement, (4) clinical verification, and (5) clinical justification/actionability. Many studies favor interpretability but neglect fidelity and seldom include user-specific evaluation. Key challenges include stakeholder diversity, evaluation of XAI, cognitive bias, and causality; we also outline opportunities. <i>Significance.</i> By linking XAI purposes to RT tasks and highlighting challenges and opportunities, we provide actionable recommendations and a user-centric framework to guide the development, validation, and deployment of XAI in RT.
Similar works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,305 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14,204 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,103 citations