OpenAlex · Updated hourly · Last updated: 17.03.2026, 10:24

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

CONSTRUCTION OF IMPROVED LIME PREDICTIVE MODEL FOR THE MULTIPLE HEALTHCARE DATA SOURCES

2026 · 0 citations · Systems and Soft Computing · Open Access

0 citations · 5 authors · 2026

Abstract

Explainable Artificial Intelligence (XAI) makes the outcomes of complex AI models understandable and transparent to humans. Current XAI applications often rely on model-agnostic overlays that produce unstable or overly local explanations. This paper therefore presents a novel Probability-aware Local Interpretable Model-agnostic Explanations (P-LIME) model that provides a proper trade-off between complex AI predictions and human-understandable explanations in healthcare settings. P-LIME incorporates probability-weighted perturbation by combining two specific weights: proximity-based weights and black-box model confidence. An exponential kernel over Euclidean distance computes the weight of each perturbed sample, so samples closer to the original receive higher weights and vice versa. The prediction probability of each perturbed sample is generated by the complex model, where higher confidence contributes more weight to building the local explanation. This dual weighting ensures that explanations focus on samples that are both relevant (close to the original data) and trustworthy (the model is confident about the prediction). Experimental analysis validates the performance of the P-LIME model on three distinct datasets: (i) an Electronic Health Record (EHR) dataset, (ii) an IoT-based health monitoring system dataset, and (iii) the MIMIC-III clinical dataset. Comparative analysis reveals that the proposed P-LIME model outperforms other state-of-the-art methods, achieving: accuracy (EHR 92.5%, IoT 93.8%, MIMIC-III 91.7%), fidelity score (EHR 91.3%, IoT 92.0%, MIMIC-III 90.1%), interpretability score (EHR 0.87, IoT 0.89, MIMIC-III 0.88), and computation time (EHR 3.1 s, IoT 3.5 s, MIMIC-III 3.8 s).
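The dual weighting described in the abstract can be illustrated with a short sketch. This is not the authors' implementation; the function name, the kernel width, and the choice of the top-class probability as the confidence term are assumptions, mirroring how standard LIME weights perturbed samples with an exponential kernel and how the abstract describes multiplying in the black-box model's confidence.

```python
import numpy as np

def dual_weights(x, Z, predict_proba, kernel_width=0.75):
    """Illustrative dual weighting: proximity kernel x model confidence.

    x             -- original instance, shape (d,)
    Z             -- perturbed samples, shape (n, d)
    predict_proba -- black-box probability function, returns shape (n, k)
    """
    # Proximity weight: exponential kernel over Euclidean distance;
    # samples closer to the original receive weights near 1.
    dists = np.linalg.norm(Z - x, axis=1)
    proximity = np.exp(-(dists ** 2) / (kernel_width ** 2))

    # Confidence weight: the black-box model's top-class probability
    # for each perturbed sample (assumed interpretation of "confidence").
    confidence = predict_proba(Z).max(axis=1)

    # Combined weight favours samples that are both near the original
    # instance and confidently classified by the black-box model.
    return proximity * confidence
```

These weights would then be passed to a weighted linear surrogate (e.g. weighted least squares) fit on `Z`, whose coefficients serve as the local explanation, as in standard LIME.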
