This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI in Healthcare: Improving Transparency and Trust in Image-Based Predictions
Citations: 0
Authors: 4
Year: 2025
Abstract
Deep learning models have achieved strong performance in medical image interpretation in recent years, but their lack of transparency hinders adoption in clinical settings. This study examines how Explainable Artificial Intelligence (XAI) techniques, specifically Grad-CAM and Image LIME, can make the diagnostic evaluation of Chest X-Ray (CXR) images clearer and more interpretable. By adding post-hoc explanation methods to a convolutional neural network (CNN) framework, the study aims to surface feature attributions and decision paths that are relevant to clinical outcomes. The model was trained on annotated chest X-ray datasets and evaluated with standard metrics such as accuracy and loss. Grad-CAM highlighted the image regions most influential to each prediction, while Image LIME provided clear, localized explanations of the model's output. A side-by-side evaluation showed that each method has distinct strengths in interpretability and clinical relevance. The study demonstrates that combining quantitative performance metrics with visual, contextual explanations substantially increases medical professionals' confidence, making AI use in healthcare diagnostics safer and more reliable.
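To make the Grad-CAM step concrete, the following is a minimal sketch in Python/PyTorch, not the paper's implementation: the paper's CNN architecture and data are unspecified, so a torchvision ResNet-18 and a random tensor stand in for the trained model and a preprocessed chest X-ray. An analogous Image LIME explanation could be produced with a perturbation-based explainer such as the lime package's LimeImageExplainer.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)  # stand-in for the paper's trained CNN (assumption)
model.eval()

acts, grads = {}, {}

def fwd_hook(module, args, output):
    acts["v"] = output.detach()          # activations of the target conv block

def bwd_hook(module, grad_input, grad_output):
    grads["v"] = grad_output[0].detach() # gradients flowing into that block

# Hook the last convolutional block, the usual Grad-CAM target layer.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed CXR image
logits = model(x)
cls = logits.argmax(dim=1).item()        # explain the predicted class
model.zero_grad()
logits[0, cls].backward()

# Weight each activation map by its average gradient, then ReLU and upsample.
w = grads["v"].mean(dim=(2, 3), keepdim=True)            # (1, C, 1, 1)
cam = F.relu((w * acts["v"]).sum(dim=1, keepdim=True))   # (1, 1, h, w)
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                    align_corners=False).squeeze()
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8) # heatmap in [0, 1]

The resulting heatmap is overlaid on the input X-ray so that clinicians can see which regions drove the prediction, which is the transparency gain the abstract describes.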
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,689 citations
Generative Adversarial Nets
2014 · 19,895 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,320 citations
"Why Should I Trust You?"
2016 · 14,535 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,194 citations