This is an overview page with metadata for this scientific work. The full article is available from the publisher.
CONSTRUCTION OF IMPROVED LIME PREDICTIVE MODEL FOR THE MULTIPLE HEALTHCARE DATA SOURCES
Citations: 0
Authors: 5
Year: 2026
Abstract
Explainable Artificial Intelligence (XAI) makes the outcomes of complex AI models understandable and transparent to humans. Current XAI applications that rely on model-agnostic overlays often produce unstable or overly local explanations. This paper therefore presents a novel Probability-aware Local Interpretable Model-agnostic Explanations (P-LIME) model that provides a proper trade-off between complex AI predictions and human-understandable explanations in healthcare settings. P-LIME incorporates probability-weighted perturbation by combining two weights: proximity-based weights and black-box model confidence. An exponential kernel over the Euclidean distance computes the weight of each perturbed sample (i.e., samples closer to the original receive higher weights and vice versa). The prediction probability of each perturbed sample is obtained from the complex model, and higher confidence contributes more weight when building local explanations. This dual weighting ensures that explanations focus on samples that are both relevant (close to the original data) and trustworthy (the model is confident about the prediction). Experimental analysis validates the performance of the P-LIME model on three distinct datasets: (i) an Electronic Health Record (EHR) dataset, (ii) an IoT-based health monitoring system dataset, and (iii) the MIMIC-III clinical dataset. Comparative analysis reveals that the proposed P-LIME model outperforms other state-of-the-art methods. P-LIME achieves: accuracy (EHR 92.5%, IoT 93.8%, MIMIC-III 91.7%), fidelity score (EHR 91.3%, IoT 92.0%, MIMIC-III 90.1%), interpretability score (EHR 0.87, IoT 0.89, MIMIC-III 0.88), and computation time (EHR 3.1 s, IoT 3.5 s, MIMIC-III 3.8 s).
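The dual weighting described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): it assumes the proximity weight is an exponential kernel over Euclidean distance with an assumed width parameter `sigma`, the confidence weight is the black-box model's maximum class probability, and the local surrogate is a weighted ridge regression. All function and variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge

def dual_weights(x, Z, probs, sigma=1.0):
    """Combine proximity and confidence weights for perturbed samples.

    x: original instance, shape (d,)
    Z: perturbed samples, shape (n, d)
    probs: black-box class probabilities for Z, shape (n, k)
    sigma: assumed kernel-width hyperparameter
    """
    dist = np.linalg.norm(Z - x, axis=1)            # distance to the original
    proximity = np.exp(-(dist ** 2) / sigma ** 2)   # closer -> higher weight
    confidence = probs.max(axis=1)                  # model confidence per sample
    return proximity * confidence                   # dual weight

# Hypothetical end-to-end example around an instance x
rng = np.random.default_rng(0)
x = np.zeros(4)
Z = x + 0.5 * rng.standard_normal((200, 4))         # perturbed neighborhood
logits = Z @ rng.standard_normal((4, 2))            # stand-in black-box model
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = probs[:, 1]                                     # black-box output to explain

w = dual_weights(x, Z, probs, sigma=1.0)
surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
print(surrogate.coef_)                              # local feature attributions
```

Under this sketch, a perturbed sample only receives high weight when it is both near the original instance and classified with high confidence, which is the trade-off the abstract attributes to P-LIME.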
Similar works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,311 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations