OpenAlex · Updated hourly · Last updated: 14 Mar 2026, 22:48

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A Method‐Oriented Review of Explainable Artificial Intelligence for Neurological Medical Imaging

2025 · 4 citations · Expert Systems
Open full text at the publisher

Citations: 4 · Authors: 3 · Year: 2025

Abstract

The adoption of artificial intelligence (AI) techniques in medical imaging has led to significant improvements in diagnostic performance, particularly in neurological disorders. However, the limited interpretability of deep learning models, often referred to as the “black box” issue, poses substantial challenges to clinical trust, transparency, and regulatory acceptance. Explainable artificial intelligence (XAI) aims to address these limitations by enhancing model transparency and interpretability. This review systematically analysed 77 eligible studies, selected from an initial pool of 108 publications, focusing on XAI applications in neurological medical imaging. The included approaches were categorised into four primary groups: (1) feature visualisation techniques, (2) hierarchical and causal interpretability methods, (3) self‐supervised and federated learning strategies, and (4) dynamic and multimodal interpretability frameworks. Each category was evaluated in terms of technical methodology, clinical applicability, and associated limitations. Feature visualisation methods such as Grad‐CAM offer intuitive visual outputs for imaging data but often lack robustness and reproducibility, while attribution methods such as SHAP provide global or local feature importance—mainly for tabular or structured data—and are less frequently applied to medical images. Hierarchical models, including Layer‐wise Relevance Propagation, provide more detailed insights but face barriers to clinical integration. Federated and self‐supervised learning approaches are increasingly explored for privacy preservation and model generalisation in medical imaging; however, the integration of explainability mechanisms into these frameworks is still at an early stage, and standardised methods for interpretable federated/self‐supervised models remain underdeveloped. Dynamic and multimodal frameworks represent a promising direction for comprehensive model explanation but are still in the early stages of exploration. Despite progress, key challenges persist, including the lack of standardised evaluation metrics, limited clinical validation, and unresolved ethical concerns. Future research should focus on integrating interpretability into model development, establishing benchmark evaluation protocols, and promoting effective human–AI collaboration in clinical workflows.
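The abstract's first category, feature visualisation, is exemplified by Grad‐CAM. As a hedged illustration of what that technique computes — a generic sketch, not code from the reviewed studies — the Python snippet below implements the standard Grad‐CAM recipe on a PyTorch classifier: gradients of the target class score are averaged over the spatial dimensions of a convolutional layer, used to weight that layer's activation maps, and the ReLU‐rectified sum is upsampled into a heatmap. The choice of resnet18, the layer model.layer4, and the random stand‐in input are assumptions made purely for the example.

# Minimal Grad-CAM sketch (illustrative only; not from the reviewed paper).
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, image, target_layer, class_idx=None):
    """Return a [H, W] Grad-CAM heatmap for one image of shape [1, 3, H, W]."""
    activations, gradients = [], []

    def fwd_hook(module, inputs, output):
        activations.append(output)          # feature maps of target_layer

    def bwd_hook(module, grad_input, grad_output):
        gradients.append(grad_output[0])    # gradients w.r.t. those maps

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        logits = model(image)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        logits[0, class_idx].backward()     # populate the backward hook
    finally:
        h1.remove()
        h2.remove()

    acts = activations[0]                   # [1, C, h, w]
    grads = gradients[0]                    # [1, C, h, w]
    weights = grads.mean(dim=(2, 3), keepdim=True)          # spatial average
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True)) # weighted sum
    cam = F.interpolate(cam, size=image.shape[2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze().detach(), class_idx

model = models.resnet18(weights=None).eval()
dummy_scan = torch.randn(1, 3, 224, 224)    # stand-in for a preprocessed slice
heatmap, cls = grad_cam(model, dummy_scan, model.layer4)
print(heatmap.shape, cls)

In a clinical setting the heatmap would be overlaid on the input scan to show which regions drove the prediction; the abstract's caveat about robustness applies here, since the map depends on the chosen layer and can vary under small input perturbations.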
