OpenAlex · Updated hourly · Last updated: 17.03.2026, 00:19

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Advancing AI Interpretability in Medical Imaging: A Comparative Analysis of Pixel-Level Interpretability and Grad-CAM Models

2025 · 56 citations · Machine Learning and Knowledge Extraction · Open Access
Open full text at the publisher

Citations: 56 · Authors: 2 · Year: 2025

Abstract

This study introduces the Pixel-Level Interpretability (PLI) model, a novel framework designed to address critical limitations in medical imaging diagnostics by enhancing model transparency and diagnostic accuracy. The primary objective is to evaluate PLI’s performance against Gradient-Weighted Class Activation Mapping (Grad-CAM) and to achieve fine-grained interpretability and improved localization precision. The methodology leverages the VGG19 convolutional neural network architecture and utilizes three publicly available COVID-19 chest radiograph datasets, consisting of over 1000 labeled images, which were preprocessed through resizing, normalization, and augmentation to ensure robustness and generalizability. The experiments focused on key performance metrics, including interpretability, structural similarity (SSIM), diagnostic precision, mean squared error (MSE), and computational efficiency. The results demonstrate that PLI significantly outperforms Grad-CAM in all measured dimensions. PLI produced detailed pixel-level heatmaps with higher SSIM scores, reduced MSE, and faster inference times, showcasing its ability to provide granular insights into localized diagnostic features while maintaining computational efficiency. In contrast, Grad-CAM’s explanations often lacked the granularity required for clinical reliability. By integrating fuzzy logic to enhance visual and numerical explanations, PLI can deliver interpretable outputs that align with clinical expectations, enabling practitioners to make informed decisions with higher confidence. This work establishes PLI as a robust tool for bridging gaps in AI model transparency and clinical usability. By addressing the challenges of interpretability and accuracy simultaneously, PLI contributes to advancing the integration of AI in healthcare and sets a foundation for broader applications in other high-stakes domains.
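To make the comparison described in the abstract concrete, the sketch below illustrates the Grad-CAM baseline on a torchvision VGG19 together with the SSIM and MSE metrics the study reports. It is a minimal, illustrative example, not the paper's implementation: the file names, the hooked layer, and the expert reference map are hypothetical assumptions, and the PLI model itself is not reproduced here.

```python
# Illustrative Grad-CAM baseline on VGG19 with SSIM/MSE evaluation.
# File names, the hooked layer, and the reference map are hypothetical
# assumptions for this sketch; the PLI model is not reproduced here.
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from skimage.metrics import structural_similarity as ssim
from torchvision import models, transforms

# Preprocessing as described in the abstract: resizing and normalization
# (augmentation is a training-time step and is omitted in this sketch).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # chest radiographs are grayscale
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.vgg19(weights=models.VGG19_Weights.DEFAULT).eval()

# Capture activations and gradients at the last convolutional layer of VGG19.
activations, gradients = {}, {}
target_layer = model.features[34]  # final Conv2d in torchvision's VGG19 feature stack

target_layer.register_forward_hook(
    lambda module, inp, out: activations.update(value=out.detach()))
target_layer.register_full_backward_hook(
    lambda module, grad_in, grad_out: gradients.update(value=grad_out[0].detach()))


def grad_cam(image_path, class_idx=None):
    """Return a Grad-CAM heatmap in [0, 1] at the network input resolution."""
    x = preprocess(Image.open(image_path)).unsqueeze(0)
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    # Weight each feature map by its spatially averaged gradient, then ReLU and normalize.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
    cam = cam.squeeze().numpy()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)


# Hypothetical evaluation against an expert-annotated reference map, using the
# structural similarity (SSIM) and mean squared error (MSE) metrics from the abstract.
heatmap = grad_cam("covid_chest_xray.png")        # hypothetical input image
reference = np.load("expert_reference_map.npy")   # hypothetical ground truth, shape (224, 224)
print("SSIM:", ssim(reference, heatmap, data_range=1.0))
print("MSE: ", float(np.mean((reference - heatmap) ** 2)))
```

In the paper's framing, a pixel-level method such as PLI would be scored with the same SSIM/MSE protocol and compared against this coarser, class-activation-level heatmap.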

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging