OpenAlex · Updated hourly · Last updated: 17.05.2026, 07:13

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A Study of LIME and SHAP Model Explainers for Autonomous Disease Predictions

2022 · 33 citations

33 citations · 6 authors · Published 2022

Abstract

Autonomous disease prediction systems are the new normal in the health industry today. These systems provide decision support for medical practitioners and operate on users' health details as input. They rely on machine learning models to generate predictions but are not capable of explaining the rationale behind a prediction as the data size grows exponentially, resulting in a lack of user trust and transparency in the decision-making abilities of these systems. Explainable AI (XAI) can help users understand and interpret such autonomous predictions, restoring user trust and making the decision-making process of these systems transparent. Adding an XAI layer on top of the machine learning models in an autonomous system can also serve as a decision support tool for medical practitioners to aid the diagnosis process. In this research paper, we analyze the two most popular model explainers, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), for their applicability in autonomous disease prediction.

Topics

Explainable Artificial Intelligence (XAI) · Machine Learning in Healthcare · Artificial Intelligence in Healthcare