OpenAlex · Updated hourly · Last updated: 14.03.2026, 23:43

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

LIME-based Explainable AI Models for Predicting Disease from Patient’s Symptoms

2023 · 6 citations · 6 authors

Open full text at publisher

Abstract

In recent years, there has been a significant increase in the use of artificial intelligence (AI) models for predicting disease from patient symptoms. However, these models are often considered black boxes because they lack transparency in how they arrive at their predictions, which raises concerns about their reliability and trustworthiness. To address this issue, explainable AI (XAI) techniques have been developed to provide insight into how these models work. One such technique is LIME (Local Interpretable Model-agnostic Explanations), which generates explanations for individual predictions by approximating the model's behavior locally. In this paper, we propose a novel approach that combines LIME with AI models for predicting disease from patient symptoms. We also apply Recursive Feature Elimination with Cross-Validation (RFECV) to diagnose disease from fewer features. We show that this approach yields highly accurate predictions together with interpretable explanations for those predictions. Prediction accuracies of 91.57%, 99.59%, 99.59%, 99.59%, 99.59%, and 99.59% were achieved for the Logistic Regression, Decision Tree, Random Forest, AdaBoost, Gradient Boosting, and Light Gradient Boosted Machine models, respectively. Our results suggest that the proposed approach has the potential to improve the trustworthiness and reliability of AI models for predicting disease from patient symptoms.
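The pipeline described in the abstract (RFECV feature selection, a classifier, and a LIME-style local explanation) can be sketched as follows. This is a minimal illustration, not the authors' code: the data is synthetic, all parameter choices are assumptions, and the LIME step is approximated by a hand-rolled model-agnostic local surrogate (Gaussian perturbations, exponential kernel weights, ridge regression) built with scikit-learn only, rather than the `lime` package itself.

```python
# Sketch: RFECV feature selection + classifier + LIME-style local surrogate.
# Synthetic data and all hyperparameters below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a symptom matrix and disease labels.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Recursive Feature Elimination with Cross-Validation: recursively drop the
# weakest feature, keeping the subset with the best cross-validated score.
selector = RFECV(RandomForestClassifier(n_estimators=50, random_state=0),
                 step=1, cv=3)
selector.fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

# Final model trained on the reduced feature set.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train_sel, y_train)

def lime_style_explanation(model, x, n_samples=1000, scale=0.5):
    """Approximate the model near x with a weighted linear surrogate, in the
    spirit of LIME for tabular data: perturb x, weight perturbations by
    proximity, and fit an interpretable (linear) model to the predictions."""
    perturbed = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    probs = model.predict_proba(perturbed)[:, 1]
    dists = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * scale ** 2 * x.shape[0]))
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbed, probs, sample_weight=weights)
    return surrogate.coef_  # per-feature local importance around x

coefs = lime_style_explanation(model, X_test_sel[0])
top = np.argsort(-np.abs(coefs))[:3]
print("features kept by RFECV:", int(selector.n_features_))
print("top local features for one prediction:", top.tolist())
```

The surrogate's coefficients play the role of a LIME explanation: they indicate which features most influence the model's output in the neighborhood of the single patient being explained, which is what makes the black-box prediction locally interpretable.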


Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · COVID-19 diagnosis using AI