This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
The role of emotion in self-explanations by cognitive agents
Citations: 27
Authors: 4
Year: 2017
Abstract
Artificial Intelligence (AI) systems, including intelligent agents, are becoming increasingly complex. Explainable AI (XAI) is the capability of these systems to explain their behaviour in a manner understandable to humans. Cognitive agents, a type of intelligent agent, typically explain their actions in terms of their beliefs and desires. However, humans also take their own and others' emotions into account in their explanations, and humans explain their emotions. We refer to the use of emotions in XAI as Emotion-aware eXplainable Artificial Intelligence (EXAI). Although EXAI should also include awareness of others' emotions, in this work we focus on how the simulation of emotions in cognitive agents can help them self-explain their behaviour. We argue that emotions simulated on the basis of cognitive appraisal theory enable (1) the explanation of these emotions, (2) their use as a heuristic to identify the beliefs and desires important for the explanation, and (3) the use of emotion words in the explanations themselves.
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,988 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,368 citations
"Why Should I Trust You?"
2016 · 14,740 citations
Generative adversarial networks
2020 · 13,342 citations