This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
The corruptive force of AI-generated advice
Citations: 9
Authors: 5
Year: 2021
Abstract
Artificial Intelligence (AI) is increasingly becoming a trusted advisor in people's lives. A new concern arises if AI persuades people to break ethical rules for profit. Employing a large-scale behavioural experiment (N = 1,572), we test whether AI-generated advice can corrupt people. We further test whether transparency about AI presence, a commonly proposed policy, mitigates the potential harm of AI-generated advice. Using the Natural Language Processing algorithm GPT-2, we generated honesty-promoting and dishonesty-promoting advice. Participants read one type of advice before engaging in a task in which they could lie for profit. Testing human behaviour in interaction with actual AI outputs, we provide the first behavioural insights into the role of AI as an advisor. Results reveal that AI-generated advice corrupts people, even when they know the source of the advice. In fact, AI's corrupting force is as strong as humans'.
Related work
The global landscape of AI ethics guidelines
2019 · 4,495 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,853 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,372 citations
Fairness through awareness
2012 · 3,265 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations