This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Unlocking the Black Box: Explainable Artificial Intelligence (XAI) for Trust and Transparency in AI Systems
Citations: 62
Authors: 1
Year: 2023
Abstract
Explainable Artificial Intelligence (XAI) has emerged as a critical field in AI research, addressing the lack of transparency and interpretability in complex AI models. This conceptual review explores the significance of XAI in promoting trust and transparency in AI systems. The paper analyzes existing literature on XAI, identifies patterns and gaps, and presents a coherent conceptual framework. Various XAI techniques, such as saliency maps, attention mechanisms, rule-based explanations, and model-agnostic approaches, are discussed to enhance interpretability. The paper highlights the challenges posed by black-box AI models, explores the role of XAI in enhancing trust and transparency, and examines the ethical considerations and responsible deployment of XAI. By promoting transparency and interpretability, this review aims to build trust, encourage accountable AI systems, and contribute to the ongoing discourse on XAI.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,284 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,233 citations
"Why Should I Trust You?"
2016 · 14,179 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,096 citations