This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explanation Perspectives from the Cognitive Sciences---A Survey
32
Citations
2
Authors
2020
Year
Abstract
With the growing adoption of AI across fields such as healthcare, finance, and the justice system, explaining an AI decision has become more important than ever before. Development of human-centric explainable AI (XAI) systems necessitates an understanding of the requirements of the human-in-the-loop seeking the explanation. This includes the cognitive behavioral purpose that the explanation serves for its recipients, and the structure that the explanation uses to reach those ends. An understanding of the psychological foundations of explanations is thus vital for the development of effective human-centric XAI systems. Towards this end, we survey papers from the cognitive science literature that address the following broad questions: (1) what is an explanation, (2) what are explanations for, and (3) what are the characteristics of good and bad explanations. We organize the insights gained therein by highlighting the advantages and shortcomings of various explanation structures and theories, discuss their applicability across different domains, and analyze their utility to various types of humans-in-the-loop. We summarize the key takeaways for human-centric design of XAI systems, and recommend strategies to bridge the existing gap between XAI research and practical needs. We hope this work will spark the development of novel human-centric XAI systems.
Related Work
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 21,050 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,381 citations
"Why Should I Trust You?"
2016 · 14,789 citations
Generative adversarial networks
2020 · 13,381 citations