OpenAlex · Updated hourly · Last updated: 14.03.2026, 16:23

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Medical Imaging with Explainable Deep Learning: Improving Clinical Diagnosis Reliability and Interpretability

2025 · 0 citations

Citations: 0 · Authors: 6 · Year: 2025

Abstract

Clinical diagnostics relies on medical imaging to diagnose cancer, cardiovascular disease, and neurological disorders. Deep learning (DL) has improved the accuracy and efficiency of medical image analysis, speeding diagnosis and improving patient outcomes. Many deep learning models, however, are "black boxes" that offer little transparency or interpretability, which is especially problematic in critical medical applications. This research combines explainable artificial intelligence (XAI) with deep learning models in medical imaging to increase diagnostic reliability and interpretability. We discuss explainable deep learning techniques such as saliency maps, Class Activation Mapping (CAM), and attention mechanisms, which reveal how models make decisions by visualising which image regions contribute most to their predictions. Explainable deep learning models can close the transparency gap in medical image analysis, allowing healthcare providers to trust and understand a model's findings. Methods such as Gradient-weighted Class Activation Mapping (Grad-CAM), Layer-wise Relevance Propagation (LRP), and Shapley Additive Explanations (SHAP) help improve the interpretability of convolutional neural networks (CNNs) and other deep learning models. We also examine the practical consequences of these models in clinical domains including radiology, pathology, and dermatology, where correct interpretation and explanation of results are essential for informed treatment decisions. The study further addresses open issues such as the trade-off between model accuracy and interpretability, the need for large annotated datasets to train explainable models, and the integration of explainable deep learning into clinical workflows. We conclude with recommendations for future research, including hybrid models that balance performance and explainability and standardised explainability metrics for medical imaging. This study emphasises the importance of making deep learning models for medical imaging accurate, transparent, and interpretable in order to build trust and improve clinical diagnosis.
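The Grad-CAM method named in the abstract weights each convolutional feature map by the spatially averaged gradient of the class score with respect to that map, sums the weighted maps, and applies a ReLU to keep only positive evidence. A minimal NumPy sketch of that core computation on synthetic activations (the function name, toy shapes, and random inputs are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap.

    feature_maps: (K, H, W) activations of the last conv layer
    gradients:    (K, H, W) gradients of the class score w.r.t. those maps
    """
    # Global-average-pool the gradients: one importance weight per channel
    weights = gradients.mean(axis=(1, 2))  # shape (K,)
    # Weighted sum over channels, then ReLU to keep positive contributions
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalise to [0, 1] so the map can be overlaid on the input image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 4 feature maps of size 8x8 with random values
rng = np.random.default_rng(0)
fmaps = rng.random((4, 8, 8))
grads = rng.random((4, 8, 8))
heatmap = grad_cam(fmaps, grads)
print(heatmap.shape)
```

In practice the feature maps and gradients would come from a trained CNN (e.g. via framework hooks); the heatmap is then upsampled to the input resolution and overlaid on the image to show which regions drove the prediction.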

Similar works