This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
A Study of LIME and SHAP Model Explainers for Autonomous Disease Predictions
33 citations · 6 authors · 2022
Abstract
Autonomous disease prediction systems have become the new normal in the health industry. These systems provide decision support for medical practitioners and generate predictions from users' health details. They are built on machine learning models, yet as data sizes grow exponentially they cannot explain the rationale behind their predictions, which erodes user trust and leaves their decision-making opaque. Explainable AI (XAI) can help users understand and interpret such autonomous predictions, restoring user trust and making the decision-making process of these systems transparent. Adding an XAI layer on top of the machine learning models in an autonomous system can also serve as a decision support tool for medical practitioners during diagnosis. In this research paper, we analyze the two most popular model explainers, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), for their applicability to autonomous disease prediction.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 21,050 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,381 citations
"Why Should I Trust You?"
2016 · 14,789 citations
Generative adversarial networks
2020 · 13,381 citations