OpenAlex · Updated hourly · Last updated: 10 Apr 2026, 02:08

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Designing effective explainable AI: a human-centered evaluation of explanation formats in financial decision-making

2026 · 0 citations · Frontiers in Artificial Intelligence · Open Access

Citations: 0 · Authors: 6 · Year: 2026

Abstract

As artificial intelligence (AI) systems are increasingly deployed in high-risk financial decision-making contexts, the demand for transparency and interpretability becomes critical. Explainable AI (XAI) has emerged as a key research domain addressing these needs. While most existing XAI studies emphasize objective quality measures such as the correctness and completeness of explanations, they often overlook the requirements of end-users and the broader ecosystem of stakeholders. This study presents a human-centered evaluation of the effectiveness of different visual explanation designs in financial AI applications. A two-phase mixed-method evaluation was conducted, combining user studies with end-users and a stakeholder workshop, to rank visual prototypes across four explanation types: feature importance, counterfactuals, contrastive/similar examples, and rule-based explanations. A key finding is the divergence between end-users and other stakeholders, including compliance officers, XAI consultants, and developers: end-users preferred concise, contextually visual explanations (e.g., small sets of decision rules or risk plots relative to similar cases), while other stakeholders often favored more complete, technically detailed representations. This highlights a critical trade-off between interpretability and completeness, and suggests that visual encoding choices may affect the effectiveness of AI explanations across different stakeholder groups.
