This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Explainable Artificial Intelligence: Bridging the gap between deep learning and human interpretability
Citations: 0 · Authors: 5 · Year: 2025
Abstract
This paper explores the critical role of explainable artificial intelligence (XAI) in bridging the gap between the high performance of deep learning models and the need for human interpretability. It investigates methods that enhance transparency and trust by providing meaningful explanations of complex model decisions, thereby addressing challenges posed by the black-box nature of deep neural networks. The study highlights the importance of developing interpretable AI systems to foster user trust and facilitate the integration of AI into sensitive domains such as healthcare and finance. Ultimately, this research aims to advance the understanding and implementation of XAI to ensure responsible and effective AI deployment in the modern era.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,452 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,258 citations
"Why Should I Trust You?"
2016 · 14,307 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,136 citations