This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Leveraging Actionable Explanations to Improve People’s Reactions to AI-Based Decisions
Citations: 0
Authors: 2
Year: 2024
Abstract
This paper explores the role of explanations in mitigating negative reactions among people affected by AI-based decisions. While existing research focuses primarily on user perspectives, this study addresses the unique needs of people affected by AI-based decisions. Drawing on justice theory and the algorithmic recourse literature, we propose that actionability is a primary need of people affected by AI-based decisions. Thus, we expected that more actionable explanations – that is, explanations that guide people on how to address negative outcomes – would elicit more favorable reactions than feature relevance explanations or no explanations. In a within-participants experiment, participants (N = 138) imagined being loan applicants and were informed that their loan application had been rejected by AI-based systems at five different banks. Participants received either no explanation, feature relevance explanations, or actionable explanations for this decision. Additionally, we varied the degree of actionability of the features mentioned in the explanations to explore whether features that are more actionable (i.e., reduce the amount of loan) lead to additional positive effects on people’s reactions compared to less actionable features (i.e., increase your income). We found that providing any explanation led to more favorable reactions, and that actionable explanations led to more favorable reactions than feature relevance explanations. However, focusing on the supposedly more actionable feature led to comparatively more negative effects, possibly due to our specific context of application. We discuss the crucial role that perceived actionability may play for people affected by AI-based decisions, as well as the nuanced effects that focusing on different features in explanations may have.
Related Work
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,349 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,219 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,631 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,480 citations