This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Clarifying Public Sector Ethics: Neutralization of Police Violence by Citizens and Generative AI
Citations: 0
Authors: 6
Year: 2025
Abstract
The increased use of (generative) AI in public services puts pressure on the public accountability of decision-making, i.e., responsibility for decisions and their consequences could be attributed to AI decision-support systems. Hence, questions about the ethicality of such decisions arise. We first use an experimental design to analyze the extent to which people neutralize police violence. Second, we compare these results with answers from generative AI (ChatGPT). We assess whether neutralization happens as a result of (1) an AI-advised arrest of a passerby, (2) the actual background of the passerby (i.e., whether the passerby had a criminal background or not, as verified by a post-hoc evaluation), and (3) the severity of the injury caused by a violent arrest. We find that both humans and generative AI attribute high importance to the actual background of the passerby (actual criminal or not) when classifying behavior as ethical or unethical. We also find that humans do not neutralize police violence as a result of AI decision support, whereas generative AI regularly and explicitly uses the AI's advice as an argument for classifying behavior as ethical or unethical.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,495 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,853 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,372 citations
Fairness through awareness
2012 · 3,265 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations