This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable Artificial Intelligence (XAI) and Prostate Cancer Diagnosis: A review of current approaches and future directions
Citations: 0
Authors: 3
Year: 2025
Abstract
Prostate cancer is one of the most widespread malignancies affecting men globally. Before the advent of artificial intelligence (AI), this disease was diagnosed using several methods, such as digital rectal examination (DRE), the prostate-specific antigen (PSA) test, and transrectal ultrasound-guided (TRUS) biopsy. The application of artificial intelligence to prostate cancer detection, prognosis, and prediction has improved outcomes. Yet, the black-box nature of many AI models limits their clinical adoption due to a lack of transparency and interpretability. This review examines current trends and applications of explainable artificial intelligence (XAI) in prostate cancer diagnosis, focusing on techniques such as Shapley Additive Explanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), and Gradient-weighted Class Activation Mapping (Grad-CAM). These methods enhance the interpretability of AI models by clarifying feature contributions and visualising decision-making processes, thereby promoting trust among medical practitioners and supporting more informed clinical decisions. In this review, we highlight that SHAP and Grad-CAM are the most widely used XAI techniques for prostate cancer prediction, and we also identify important characteristics of XAI. This aids the identification of clinically relevant features and the mitigation of model biases. Despite these advancements, this paper reveals the need for further research and wider adoption of XAI techniques in prostate cancer studies to ensure more reliable and transparent AI-driven diagnostics.
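To illustrate the kind of feature attribution the abstract describes, here is a minimal, self-contained sketch that computes exact Shapley values for a toy risk score by enumerating feature coalitions. The model, feature values, and baseline are all hypothetical examples, not taken from the review; real SHAP tooling (e.g. the `shap` library) approximates these values efficiently for complex models.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x against a baseline.

    v(S) evaluates f with features in coalition S taken from x and the
    remaining features taken from the baseline; each feature's value is
    the weighted average of its marginal contribution over all coalitions.
    """
    n = len(x)

    def v(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

# Toy linear "risk model" with two features (purely illustrative).
f = lambda z: 2.0 * z[0] + 1.0 * z[1] + 0.5
x = [3.0, 1.0]          # patient's feature values (hypothetical)
baseline = [1.0, 0.0]   # reference / population baseline (hypothetical)

print(shapley_values(f, x, baseline))  # → [4.0, 1.0]
```

For a linear model the attributions reduce to weight times feature deviation from the baseline, and they sum to f(x) - f(baseline), which is the additivity property SHAP relies on.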
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,452 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,258 citations
"Why Should I Trust You?"
2016 · 14,307 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,136 citations