OpenAlex · Updated hourly · Last updated: 22.03.2026, 02:17

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Use of Interpretable AI Models in Breast Cancer Risk and Detection: A Scoping Review Approach

2025 · 0 citations · 3 authors
Open full text at the publisher

Abstract

Artificial intelligence (AI) has transformed healthcare, with data-driven algorithms gaining popularity for clinical applications. Despite their promise, many such models are perceived as “black boxes” due to their complexity, limiting trust among clinicians. To address this, explainable AI (XAI) has emerged, offering insights into AI decision-making. This scoping review explores how XAI has been applied to breast cancer detection and risk prediction. A systematic search across Scopus, IEEE Xplore, PubMed, and Google Scholar (top 50 results) was conducted for peer-reviewed literature from January 2017 to July 2023. From this search, 30 relevant studies were identified. The findings show that SHapley Additive exPlanations (SHAP) is the most widely used model-agnostic XAI method in breast cancer studies. SHAP excels in interpreting predictions, identifying biomarkers, and supporting prognosis and survival assessments, particularly in tree-based machine learning models. Its popularity stems from being versatile, easy to implement, and highly compatible with ensemble models. Overall, the adoption of explainable AI improves the transparency, fairness, and clinical utility of AI tools, paving the way for more trustworthy and effective healthcare technologies.
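The abstract highlights SHAP as the dominant model-agnostic explanation method but includes no code. As a rough, self-contained illustration of the idea SHAP builds on, the sketch below computes exact Shapley attributions for a tiny hypothetical linear model, replacing "absent" features with a baseline value (the toy model, its weights, and the zero baseline are assumptions for this example, not anything from the paper; real SHAP implementations approximate this computation efficiently for large models):

```python
from itertools import combinations
from math import factorial

# Toy "model": a linear scoring function over 3 features. For a linear
# model the exact Shapley value of feature i is w_i * (x_i - baseline_i),
# which gives a known answer to sanity-check the computation against.
WEIGHTS = [2.0, -1.0, 0.5]

def model(x):
    return sum(w * v for w, v in zip(WEIGHTS, x))

def shapley_values(x, baseline):
    """Exact Shapley values: the weighted average marginal contribution of
    each feature over all coalitions, with absent features set to baseline."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                # Standard Shapley coalition weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(x, baseline)
# Attributions sum to model(x) - model(baseline) (local accuracy),
# the property that makes SHAP values additive explanations.
```

Enumerating all coalitions is exponential in the number of features, which is why practical SHAP tooling relies on sampling or on structure-specific shortcuts such as the tree-based algorithms the review found to pair well with ensemble models.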


Topics

Explainable Artificial Intelligence (XAI) · AI in cancer detection · Artificial Intelligence in Healthcare and Education