This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Explainable Artificial Intelligence in Cancer Care: A Domain‐Wise Review of Adoption, Challenges and Opportunities
Citations: 0
Authors: 4
Year: 2026
Abstract
Explainable artificial intelligence (XAI) is emerging as a critical enabler in cancer care, where high-stakes decisions demand transparency, trust and regulatory accountability. This domain-wise review systematically synthesises the adoption of XAI across major cancer types, classifying them into highly explored, moderately explored and underexplored categories. Leading interpretability methods, including SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), Gradient-weighted Class Activation Mapping (Grad-CAM) and counterfactual reasoning, are critically examined, and their applications are evaluated across imaging, genomics and multimodal cancer workflows. Persistent challenges, such as data scarcity, methodological inconsistency, algorithmic bias and clinical validation gaps, are highlighted, with particular focus on underrepresented cancers such as prostate, thyroid and pancreatic malignancies. The robustness and reproducibility of widely adopted XAI tools are evaluated, alongside an analysis of regulatory imperatives under emerging frameworks such as the European Union Artificial Intelligence Act (EU AI Act) and guidance from the United States Food and Drug Administration (FDA). Finally, the review outlines strategic directions that integrate technical, clinical and ethical dimensions for the development of transparent, reliable and clinically relevant AI systems in cancer care.
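Of the interpretability methods named in the abstract, counterfactual reasoning is the simplest to illustrate without the full article: it asks what minimal change to a patient's features would flip the model's decision. The sketch below is purely illustrative and assumes a toy linear classifier with hypothetical biomarker features; the weights, feature meanings and greedy search are not taken from the review.

```python
# Illustrative sketch of counterfactual reasoning, one of the XAI methods
# surveyed in the abstract. The toy linear "tumour" classifier, its weights,
# and the greedy search strategy are assumptions for demonstration only.

def predict_malignant(features, weights, bias):
    """Return True when the linear score is positive ('malignant')."""
    return sum(w * x for w, x in zip(weights, features)) + bias > 0

def counterfactual(features, weights, bias, step=0.01, max_iters=10_000):
    """Nudge the most influential feature until the prediction flips.

    Returns a modified copy of `features` that receives the opposite
    label, i.e. a greedy approximation of the smallest change that
    would alter the model's decision.
    """
    x = list(features)
    original = predict_malignant(x, weights, bias)
    direction = -1.0 if original else 1.0  # push the score toward the boundary
    for _ in range(max_iters):
        if predict_malignant(x, weights, bias) != original:
            return x
        # adjust the feature with the largest absolute weight
        i = max(range(len(x)), key=lambda j: abs(weights[j]))
        x[i] += direction * step * (1.0 if weights[i] > 0 else -1.0)
    raise RuntimeError("no counterfactual found within the step budget")

if __name__ == "__main__":
    weights, bias = [2.0, -1.0], -0.5
    patient = [0.6, 0.2]  # hypothetical normalised biomarker values
    cf = counterfactual(patient, weights, bias)
    print(predict_malignant(patient, weights, bias))  # True  (malignant)
    print(predict_malignant(cf, weights, bias))       # False (flipped)
```

In a clinical setting the explanation would be phrased in feature terms ("had biomarker A been below this threshold, the prediction would change"), which is why the abstract groups counterfactuals with SHAP, LIME and Grad-CAM as patient-facing transparency tools.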
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,488 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,263 citations
"Why Should I Trust You?"
2016 · 14,333 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,147 citations