This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Exploration of Explainable AI for Trust Development on Human-AI Interaction
4
Citations
2
Authors
2023
Year
Abstract
In recent years, the revolutionary impact of Artificial Intelligence (AI) cannot be overstated. This groundbreaking technology has radically transformed how we perform our daily tasks, thereby redefining the very fabric of our society. However, as our reliance on AI systems continues to grow, the need for calibrated trust becomes increasingly pressing. To address this concern, the concept of Explainable AI (XAI) has been introduced to provide human-level explanations. The primary goal is to offer cognitive information that prompts informed trust decisions. Nevertheless, it is essential to recognize that trust is a multidimensional construct, involving various means of processing beyond mere explanations. To fully understand and explore these dimensions within the context of XAI, this research aims to uncover and comprehend the additional facets of trust. Through an exploratory survey, it was confirmed that XAI serves a vital purpose in facilitating trust, and it can be effectively processed through affective means. Furthermore, the presentation of information beyond the depth of explanation was found to play a significant role in moderating trust formation during human-AI interactions.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,310 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations