This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Turning Off Your Better Judgment – Conformity to Algorithmic Recommendations
Citations: 3
Authors: 2
Year: 2023
Abstract
In many practical settings, humans rely on artificial intelligence algorithms to facilitate decision processes: from choosing driving routes and picking leisure activities to defining investment plans and guiding judicial procedures. Given that algorithms are known to be vulnerable to systematic errors and biases, this research seeks to understand: To what degree do humans take algorithmic advice at face value, even when such advice might be expected to conflict with their better judgment? To address this question, we draw from classic studies of social conformity and explore the extent to which gig-economy workers engaged in simple and objective perceptual-judgment (image classification) tasks conform to algorithmic recommendations that are clearly erroneous. Results from three studies (n = 1,085) show that substantial percentages of workers follow erroneous algorithmic recommendations (on 8.8%–26.5% of tasks); workers who do not view such recommendations do not make similar mistakes independently (only on about 1% of tasks). Moreover, workers are more likely to conform to erroneous algorithmic recommendations than to identical recommendations generated by other humans. We further show that workers become less likely to conform to an erroneous algorithmic recommendation when it is presented alongside a second, conflicting (correct) recommendation. Finally, conformity diminishes when workers perceive the real-life impact of their decisions as high (versus low). Given our realistic setup, our findings are directly applicable to workers engaged in hybrid machine–human judgment tasks, in addition to providing broader insights into the nature of human reliance on algorithms—and the risks it might entail.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations