This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Mitigating Algorithm Aversion in Recruiting: A Study on Explainable AI for Conversational Agents
Citations: 8
Authors: 3
Year: 2024
Abstract
The use of conversational agents (CAs) based on artificial intelligence (AI) is becoming more common in the field of recruiting. Organizations are now adopting AI-based CAs for applicant (pre-)selection, but negative news coverage, especially regarding the black-box character of AI, has hindered adoption. So far, little is known about the contextual factors influencing users' perception of AI-based CAs in general and the effect of explanations provided by explainable AI (XAI) in particular. While research on algorithm aversion offers some initial explanations, information on how different XAI approaches and different types of decisions affect the attitudes of (potential) applicants is scarce. Therefore, in this study, we use a quantitative, quota-representative study (n = 490) to assess the acceptance of CAs in recruiting. By applying an experimental within-subject design, we provide a more nuanced perspective on why and when providing explanations increases user acceptance. We also show that contextual factors such as the type of assessed skills are major determinants of this effect, and we conclude that XAI is not a "one-size-fits-all" approach. Based on the insight that contextual factors of the decision problem are more important than the type of XAI approach itself, we argue that the use and the effects of explainability in recruiting need a more nuanced perspective, focusing on the fit of explanations with the user's characteristics and preferences.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,284 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,233 citations
"Why Should I Trust You?"
2016 · 14,179 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,096 citations