This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI and Interpretable Machine Learning: A Case Study in Perspective
98 citations · 6 authors · 2022
Abstract
Explainable AI, as the name implies, is a form of artificial intelligence that enables the explanation of learning models and focuses on why a system arrived at a particular decision, exploring its logical paradigms, in contrast to the inherent black-box nature of artificial intelligence. Similarly, machine learning interpretability allows users to comprehend the results of learning models by providing reasoning for the decisions they arrive at. This nature of Explainable AI (XAI) and Interpretable Machine Learning (IML) is particularly helpful for AI applications in healthcare and medical diagnosis. In this paper, we present a case study in which we use the ELI5 XAI toolkit in conjunction with the LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) algorithmic frameworks in Python to determine whether a patient is diabetic, based on a randomized clinical trial dataset. We also endeavor to point out trends and the most vital factors that can help clinicians and researchers analyze patient data in conjunction with machine learning and artificial intelligence outputs. Having explanations for machine learning models allows a higher degree of interpretability and paves the way for accountability and transparency in medical and other fields of data analysis. We explore the aforementioned paradigms in the context of this research paper, paving the way for an accountable, transparent, and robust data analytics framework using XAI and IML.
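The abstract describes a model-agnostic explanation workflow: train a classifier on clinical data, then attribute its predictions to input features. As a minimal sketch of that pattern, the snippet below uses scikit-learn's permutation importance in place of the paper's ELI5/LIME/SHAP toolchain, and a synthetic dataset as a stand-in for the clinical trial data (both substitutions are assumptions, not the authors' code):

```python
# Sketch of model-agnostic feature attribution for a binary (diabetic /
# not diabetic) classifier. Permutation importance stands in here for the
# LIME/SHAP attribution the paper uses; the synthetic dataset stands in
# for the randomized clinical trial data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 8 "clinical" features, 4 of them actually predictive.
X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)

# Shuffle each feature column and measure the drop in held-out accuracy:
# features whose permutation hurts the model most are the "most vital
# factors" a clinician would inspect first.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
for i in ranking:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```

SHAP or LIME would replace the `permutation_importance` call with per-prediction attributions, but the surrounding train/explain/rank structure is the same.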
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 21,022 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,375 citations
"Why Should I Trust You?"
2016 · 14,775 citations
Generative adversarial networks
2020 · 13,364 citations