OpenAlex · Updated hourly · Last updated: 23 Apr 2026, 00:03

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable AI in Healthcare: Improving Transparency and Trust in Image-Based Predictions

2025 · 0 citations
Open full text at the publisher

Citations: 0 · Authors: 4 · Year: 2025

Abstract

Deep learning models have achieved strong performance in medical image interpretation in recent years, but their lack of transparency hinders their adoption in clinical settings. This study examines how Explainable Artificial Intelligence (XAI) techniques, specifically Grad-CAM and Image LIME, can make the diagnostic evaluation of chest X-ray (CXR) images more transparent and interpretable. By integrating post-hoc explanation methods into a convolutional neural network (CNN) framework, the study aims to surface feature attributions and decision paths that are relevant to clinical outcomes. The model was trained on annotated chest X-ray datasets and evaluated with standard metrics such as accuracy and loss. Grad-CAM clarified model predictions by highlighting important pixels in specific regions, while Image LIME provided clear, localized explanations of individual predictions. A side-by-side evaluation showed that each method has distinct strengths in interpretability and clinical relevance. The study concludes that combining quantitative performance metrics with visual, contextual explanations substantially increases medical professionals' confidence, making AI use in healthcare diagnostics safer and more reliable.
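The Grad-CAM technique named in the abstract can be sketched in a minimal, framework-free form. This is an illustrative sketch only, not the paper's implementation: the function name, array shapes, and the toy random inputs are assumptions, and in practice the activations and gradients would come from a trained CNN's final convolutional layer.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap from a conv layer's activations and the
    gradients of the target class score with respect to those activations.

    activations, gradients: arrays of shape (K, H, W), one map per channel.
    Returns a heatmap of shape (H, W), normalized to [0, 1].
    """
    # Channel importance weights alpha_k: global-average-pool the gradients.
    alphas = gradients.mean(axis=(1, 2))                 # shape (K,)
    # Weighted sum of activation maps, then ReLU to keep positive evidence only.
    cam = np.maximum((alphas[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize for display; guard against an all-zero map.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with random arrays standing in for a CNN's last conv layer.
rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))             # 8 channels of 7x7 activations
grads = rng.standard_normal((8, 7, 7))   # gradients of the class score
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

The resulting heatmap is typically upsampled to the input image's resolution and overlaid on the X-ray, which is how the highlighted regions described in the abstract are produced.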

Topics

Explainable Artificial Intelligence (XAI) · COVID-19 diagnosis using AI · Artificial Intelligence in Healthcare and Education