OpenAlex · Updated hourly · Last updated: 19.03.2026, 01:50

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Pop-Out Effect of Rarer Occurring Stimuli Shapes the Effectiveness of AI Explainability

2024 · 5 citations · Proceedings of the Human Factors and Ergonomics Society Annual Meeting

Citations: 5 · Authors: 4 · Year: 2024

Abstract

Explainable artificial intelligence (XAI) is proposed to improve transparency and performance by providing information about AI's limitations. Specifically, XAI could support appropriate behavior in cases where AI errors occur due to less training data. These error-prone cases might be salient (pop-out) because of their naturally rarer occurrence. The current study investigated how this pop-out effect influences explainability's effectiveness on trust and dependence. In an online experiment, participants (N = 128) estimated the contamination degree of bacterial stimuli. The lower occurrence of error-prone stimuli was indicated by one of two colors. Participants either knew about the error-prone color (XAI) or not (nonXAI). Contrary to earlier research without salient error-prone trials, explainability did not help participants follow correct recommendations in non-error-prone trials but helped them correct AI's errors in error-prone trials. However, explainability still led to over-correction in correct error-prone trials. This poses the challenge of implementing explainability while mitigating its negative effects.

Topics

Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education