This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
When algorithmic managers fail to fulfill their promises: The role of anthropomorphism in shaping justice perceptions
Citations: 0
Authors: 4
Year: 2026
Abstract
This study explored how employees perceive distributive justice (i.e., perceived fairness of an outcome) when algorithmic managers fail to fulfill their promises. Drawing on anthropomorphism theory, we further investigated the moderating role of the anthropomorphism of algorithmic managers. We hypothesized that employees perceive lower distributive justice when their algorithmic manager fails to fulfill transactional rather than relational promises, especially when the manager is not anthropomorphized. We conducted two vignette experiments to test our hypotheses. In both Study 1 (N = 258 employees; M_age = 36.76) and Study 2 (N = 248 employees; M_age = 37.12), a 2 (type of nonfulfilled promises: relational versus transactional) × 2 (algorithmic manager's anthropomorphism: high versus low) between-subjects design was employed. Study 2 (preregistered) further examined the mediating role of the perceived rigidity of the algorithmic manager in the hypothesized relationships. Study 1 showed that employees perceive lower distributive justice when algorithmic managers fail to fulfill transactional (as opposed to relational) promises, especially when managers are not anthropomorphized. In Study 2, we found that under these conditions, algorithmic managers are perceived as more rigid, which in turn is related to lower perceived distributive justice. These findings highlight the benefits of adding human-like traits to algorithmic management systems to reduce negative reactions when such systems fail to fulfill their commitments. However, ethical concerns arise about encouraging employees to treat algorithmic managers as humans, as anthropomorphization may blur boundaries and undermine accountability.
Similar Works
The global landscape of AI ethics guidelines
2019 · 4,514 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,859 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,386 citations
Fairness through awareness
2012 · 3,269 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations