OpenAlex · Updated hourly · Last updated: 14.03.2026, 09:58

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The corruptive force of AI-generated advice

2021 · 9 citations · Kölner Universitäts PublikationsServer (Universität zu Köln) · Open Access
Open full text at publisher

9 citations · 5 authors · Year: 2021

Abstract

Artificial Intelligence (AI) is increasingly becoming a trusted advisor in people's lives. A new concern arises if AI persuades people to break ethical rules for profit. Employing a large-scale behavioural experiment (N = 1,572), we test whether AI-generated advice can corrupt people. We further test whether transparency about AI presence, a commonly proposed policy, mitigates potential harm of AI-generated advice. Using the Natural Language Processing algorithm, GPT-2, we generated honesty-promoting and dishonesty-promoting advice. Participants read one type of advice before engaging in a task in which they could lie for profit. Testing human behaviour in interaction with actual AI outputs, we provide first behavioural insights into the role of AI as an advisor. Results reveal that AI-generated advice corrupts people, even when they know the source of the advice. In fact, AI's corrupting force is as strong as humans'.

Topics

Ethics and Social Impacts of AI · Psychology of Moral and Emotional Judgment · Artificial Intelligence in Healthcare and Education