This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Assessing Satisfaction in and Understanding of a Collaborative Explainable AI (CXAI) System through User Studies
Citations: 3
Authors: 4
Year: 2022
Abstract
Modern artificial intelligence (AI) and machine learning (ML) systems have become more capable and more widely used, but they often rely on underlying processes their users do not understand and may not trust. Some researchers have addressed this by developing ‘Explainable’ AI (XAI) algorithms that help explain the workings of the system, but these have not always succeeded in improving users’ understanding. Alternatively, collaborative user-driven explanations may address the needs of users, augmenting or replacing algorithmic explanations. We evaluate one such approach, called “collaborative explainable AI” (CXAI). Across two experiments, we examined CXAI to assess whether users’ mental models, performance, and satisfaction improved with access to user-generated explanations. Results showed that users with access to collaborative explanations developed a better understanding of, and greater satisfaction with, the system than users without access, suggesting that a CXAI system may provide a useful support that more dominant XAI approaches do not.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,305 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,204 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,103 citations