This is an overview page with metadata for this scientific work. The full article is available from the publisher.
From Lay Belief to Advice Adoption: Expectation Violations and Trust in Algorithmic Advisors
Citations: 0
Authors: 2
Year: 2024
Abstract
Organizations increasingly employ algorithms to offer advice to employees in various contexts. Yet existing research has found conflicting evidence regarding how people utilize advice provided by algorithms. Extending algorithmic advice research, our study integrates expectancy violation theory and the trust literature to investigate a set of serial psychological mechanisms linking people's lay beliefs about algorithms to their adoption of algorithmic advice. Specifically, we examine how advisor type (algorithm versus human) interacts with information type (subjective, e.g., social skills, versus objective, e.g., standardized test scores) to affect people's expectation violations, trust in the advice, and ultimately their advice adoption. Through two online experiments, we show that people view algorithmic advice based on subjective information as violating their expectations more than advice based on objective information, whereas they perceive human advice based on objective information as more expectation-violating. These expectation violations predict trust in the advice offered by algorithms and humans, which in turn affects people's likelihood of changing their decisions based on the advice. These findings highlight the importance of considering user expectations when examining the influence of algorithmic advice on decision making.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,536 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,859 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,392 citations
Fairness through awareness
2012 · 3,270 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations