This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The Use of Interpretable AI Models in Breast Cancer Risk and Detection: A Scoping Review Approach
Citations: 0 · Authors: 3 · Year: 2025
Abstract
Artificial intelligence (AI) has transformed healthcare, with data-driven algorithms gaining popularity for clinical applications. Despite their promise, many such models are perceived as “black boxes” due to their complexity, limiting trust among clinicians. To address this, explainable AI (XAI) has emerged, offering insights into AI decision-making. This scoping review explores how XAI has been applied to breast cancer detection and risk prediction. A systematic search across Scopus, IEEE Xplore, PubMed, and Google Scholar (top 50 results) was conducted for peer-reviewed literature from January 2017 to July 2023. From this search, 30 relevant studies were identified. The findings show that SHapley Additive exPlanations (SHAP) is the most widely used model-agnostic XAI method in breast cancer studies. SHAP excels in interpreting predictions, identifying biomarkers, and supporting prognosis and survival assessments, particularly in tree-based machine learning models. Its popularity stems from being versatile, easy to implement, and highly compatible with ensemble models. Overall, the adoption of explainable AI improves the transparency, fairness, and clinical utility of AI tools, paving the way for more trustworthy and effective healthcare technologies.
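To illustrate the Shapley-value idea underlying SHAP mentioned in the abstract, the following is a minimal, self-contained sketch (not the `shap` library itself): it computes exact Shapley attributions for a toy model by averaging each feature's marginal contribution over all feature orderings, with absent features replaced by a baseline value. The function name `shapley_values` and the linear toy model are illustrative assumptions, not from the paper.

```python
from itertools import permutations

def shapley_values(predict, x, baseline):
    """Exact Shapley attributions for a small feature set.

    predict  -- model function taking a feature vector
    x        -- the instance being explained
    baseline -- reference values used when a feature is 'absent'
    """
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for perm in perms:
        present = set()
        for i in perm:
            # Marginal contribution of feature i given the features
            # already 'present' in this ordering.
            before = predict([x[j] if j in present else baseline[j] for j in range(n)])
            present.add(i)
            after = predict([x[j] if j in present else baseline[j] for j in range(n)])
            phi[i] += after - before
    return [p / len(perms) for p in phi]

# Toy linear model: f(a, b, c) = 2a + 3b + c
phi = shapley_values(lambda v: 2 * v[0] + 3 * v[1] + v[2], [1, 2, 3], [0, 0, 0])
print(phi)  # → [2.0, 6.0, 3.0]
```

The attributions sum to f(x) − f(baseline) (here 11), the local-accuracy property that makes SHAP attractive for interpreting individual clinical predictions; practical tools such as TreeSHAP achieve the same result efficiently for tree ensembles instead of enumerating all orderings.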
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,366 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,244 citations
"Why Should I Trust You?"
2016 · 14,255 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,122 citations