OpenAlex · Updated hourly · Last updated: 31.03.2026, 23:54

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Personalizing explanations in AI-based decisions: The effects of personalization and (mis)aligning with individual preferences

2025 · 0 citations · Computers in Human Behavior · Open Access
Open full text at publisher

Citations: 0

Authors: 4

Year: 2025

Abstract

The increasing reliance on AI-based decision-making in high-stakes contexts underscores the need for transparency and justice. Here, negative outcomes drive individuals affected by AI-based decisions to seek actionable explanations that enable them to realize what they can do to achieve a better future outcome. However, actionability is subjective, varying across individuals and contexts. Personalization of explanations has been proposed to address this variability, but insights on personalized explanation processes, their potential, and challenges are scarce. This paper investigates the impact of personalization and (mis)alignment with individual needs and preferences in explanations for AI-based decisions through an experimental online study simulating denied loan applications. In a within-participants design (N = 255), participants ranked the actionability of decision-relevant features and experienced five explanation conditions: personalized directive explanations based on the most, second most, or least actionable feature (as ranked by participants); a non-personalized directive explanation highlighting a random feature; and no explanation. In line with justice theory, our results show that any explanation was better than none, and that personalized explanations led to more favorable reactions than non-personalized explanations, enhancing perceptions of justice and attractiveness of the bank. Closer alignment with preferences had only small positive effects, mainly for attractiveness. These findings highlight that even simple ranking-based approaches can make explanations more effective and accessible without requiring technical expertise while cautioning against offering superficial control. This study provides insights into the effects of ranking-based personalization, informing the design of explainability tailored to diverse user needs and addressing ethical and practical considerations in personalization.
• Directive explanations after unfavorable AI decisions mitigate negative reactions.
• Personalizing explanations enhances explanation effectiveness.
• Personalization also bears the risk of offering only superficial control.
• Identical explanations may be judged differently based on perceived control.

Similar works

Authors

Institutions

Topics

Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education