This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Understanding, Visualizing and Explaining XAI Through Case Studies
Citations: 4
Authors: 4
Year: 2022
Abstract
Data-driven AI applications require Accountability, Responsibility, and Transparency. At the same time, users' expectations of the accuracy and precision of the predictions are unreasonable. This draws attention to explainability, an inherent limitation of traditional intelligent systems, which are black box in nature. As Explainable AI (XAI) seeks to bridge this gap through the explanation of AI models, this paper focuses on the study of achieving Responsible AI through XAI. AIX360 is a robust, popular tool used to provide explainability from various viewpoints. Hence, this work uses AIX360 to bring explainability to the viewpoints of various stakeholders, namely domain experts, service providers, and customers. The uniqueness of the paper lies in designing a conceptual model for XAI-based predictive analysis. This paper is an attempt to understand, visualize, and explain the observations through two data-intensive intelligent systems as case studies: HELOC (Home Equity Line of Credit) and ISIC (International Skin Imaging Collaboration).
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,311 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations