This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
How to Facilitate Explainability of AI for Increased User Trust: Results of a Study with a COVID-19 Risk Calculator
Citations: 0 · Authors: 4 · Year: 2021
Abstract
While the market for smart technologies is steadily growing, much research remains to be done on the interaction between human users and Artificial Intelligence (AI) technologies. Specifically, the field of Explainable Artificial Intelligence (XAI) focuses on making AI explainable to users. To provide a user-centered approach to this growing field, this paper describes a study investigating possible processes and methods. For this purpose, 20 participants were asked to use an AI system that provided them with the results of a personalized COVID-19 risk calculation. The study results indicate that while participants generally seemed to think that the presented results of the system were accurate, only a few said that they would change their behavior after receiving the results, and many asked for additional information to better understand the results. This paper discusses the findings along with possible approaches to increase behavior change in users of smart systems.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?"
2016 · 14,227 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,114 citations