This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Impact of Explanations for Trustworthy and Transparent Artificial Intelligence
7
Citations
4
Authors
2023
Year
Abstract
Trust is a fundamental aspect of the interaction between humans and artificial intelligence (AI). Building and maintaining trust requires designing AI systems that are transparent, explainable, and trustworthy, and providing appropriate feedback to users so that they can understand the system's behaviour. This work evaluates the impact of different explanations (local and global) on humans' trust in and understanding of a facial expression recognizer. Results show that explanations are appreciated when present; when no explanations are given, users apply their own mental model of how the system works and trust it if their experience using it is positive.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 cit.
Generative Adversarial Nets
2023 · 19,841 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,241 cit.
"Why Should I Trust You?"
2016 · 14,227 cit.
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,114 cit.