This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable AI (XAI): Explained
37
Citations
2
Authors
2023
Year
Abstract
Artificial intelligence (AI) has become an integral part of our lives, from the recommendations we receive on social media to the diagnoses made by medical professionals. However, as AI models continue to grow more complex, the "black box" nature of many of them has become a cause for concern. The main objective of Explainable AI (XAI) research is to produce AI models that are easily interpretable and understandable by humans. To that end, this paper presents an overview of XAI and its techniques for creating interpretable models, focusing on Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). Furthermore, the paper surveys applications of XAI across domains, including healthcare, finance, and law, and touches on the ethical and legal implications of its use. Finally, the paper discusses open challenges and future research directions for XAI.
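The abstract names SHAP, which attributes a model's prediction to its input features via Shapley values from cooperative game theory. As a minimal sketch of the underlying idea (not the paper's code: the toy linear model, its weights, and the zero baseline below are assumptions chosen purely for illustration), the exact Shapley values of a small model can be computed by brute force over all feature coalitions:

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model: a linear function of three features.
WEIGHTS = [2.0, 1.0, 0.5]

def model(x):
    return sum(w * xi for w, xi in zip(WEIGHTS, x))

def shapley_values(model, x, baseline):
    """Exact Shapley values: average marginal contribution of each
    feature over all coalitions, with absent features replaced by
    their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Coalition weight: |S|! * (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                x_with = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                x_without = [x[j] if j in S else baseline[j]
                             for j in range(n)]
                phi[i] += w * (model(x_with) - model(x_without))
    return phi

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)

# Efficiency property: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (model(x) - model(baseline))) < 1e-9
```

For a linear model with a zero baseline, each attribution reduces to the feature's weight times its value, so `phi` here is `[2.0, 2.0, 1.5]`. Production libraries such as `shap` approximate these values efficiently rather than enumerating all coalitions, which is exponential in the number of features.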
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,253 cit.
Generative Adversarial Nets
2014 · 19,841 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,230 cit.
"Why Should I Trust You?"
2016 · 14.156 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,093 cit.