This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Does Explainable Artificial Intelligence Improve Human Decision-Making?
Citations: 26
Authors: 5
Year: 2020
Abstract
Explainable AI provides users with insights into the why behind model predictions, offering the potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect. Prior research on human and explainable AI interactions has typically focused on measures such as interpretability, trust, and usability of the explanation. There are mixed findings on whether explainable AI can improve actual human decision-making and the ability to identify problems with the underlying model. Using real datasets, we compare objective human decision accuracy without AI (control), with an AI prediction (no explanation), and with an AI prediction plus explanation. We find that providing any kind of AI prediction tends to improve user decision accuracy, but no conclusive evidence that explainable AI has a meaningful impact. Moreover, we observed that the strongest predictor of human decision accuracy was AI accuracy, and that users were somewhat able to detect when the AI was correct vs. incorrect, but this was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the why information provided in explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,299 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14,198 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,098 citations