This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Choosing Between Human and Algorithmic Advisors: The Role of Responsibility Sharing
Citations: 1
Authors: 3
Year: 2022
Abstract
Algorithms are increasingly employed to provide accurate advice across domains, yet in many cases people tend to prefer human advisors, a phenomenon termed algorithm aversion. To date, studies have focused mainly on the effects of an advisor's perceived competence, i.e., the ability to give accurate advice, on people's willingness to accept advice from human and algorithmic advisors and to arbitrate between them. Building on studies showing differences in responsibility attribution between human and algorithmic advisors, we hypothesize that the ability to psychologically offload responsibility for the decision's potential consequences onto the advisor is an important factor affecting advice takers' choice between human and algorithmic advisors. In an experiment in the medical and financial domains (N = 806), participants were asked to rate advisors' perceived responsibility and to choose between a human and an algorithmic advisor. Our results show that human advisors were perceived as more responsible than algorithmic advisors and that the perception of the advisor's responsibility affected the advice takers' choice of advisor. Furthermore, we found that an experimental manipulation that impeded advice takers' ability to offload responsibility affected the extent to which human, but not algorithmic, advisors were perceived as responsible. Together, our findings highlight the role of responsibility sharing in shaping algorithm aversion.
Similar Works
The emotional dog and its rational tail: A social intuitionist approach to moral judgment.
2001 · 7,790 citations
Social Psychology of Intergroup Relations
1982 · 7,749 citations
Implicit social cognition: Attitudes, self-esteem, and stereotypes.
1995 · 6,292 citations
A study of normative and informational social influences upon individual judgment.
1955 · 4,689 citations
The global landscape of AI ethics guidelines
2019 · 4,617 citations