This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Justice, trust, and moral judgements when personnel selection is supported by algorithms
Citations: 26
Authors: 4
Year: 2023
Abstract
Although algorithm-based systems are increasingly used as decision support for managers, there is still a lack of research on the effects of algorithm use, and more specifically of potential algorithmic bias, on decision-makers. To investigate how potential social bias in a recommendation outcome influences trust, fairness perceptions, and moral judgement, we used a moral dilemma scenario. Participants (N = 215) imagined being human resource managers responsible for personnel selection who received decision support from either human colleagues or an algorithm-based system. They received an applicant preselection that was either gender-balanced or predominantly male. Although participants perceived algorithm-based support as less biased, they also perceived it as generally less fair and had less trust in it. This could be related to the finding that participants perceived algorithm-based systems as more consistent but also as less likely to uphold moral standards. Moreover, participants tended to reject algorithm-based preselection more often than human-based preselection and were more likely to use utilitarian judgements when accepting it, which may indicate different underlying moral judgement processes.
Related works
The global landscape of AI ethics guidelines
2019 · 4,502 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,855 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,376 citations
Fairness through awareness
2012 · 3,266 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations