This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Gender, knowledge, and trust in artificial intelligence: a classroom-based randomized experiment
Citations: 0
Authors: 4
Year: 2025
Abstract
Artificial intelligence (AI) is increasingly utilized to provide real-time assistance and recommendations across a wide range of tasks in both education and workplace settings, especially since the emergence of Generative AI. However, it is unclear how users perceive the trustworthiness of these tools, particularly given the widely publicized "hallucinations" that such systems may produce. We conduct a randomized field experiment in an undergraduate course setting where students perform periodic tests using a digital platform. We analyze how subject characteristics affect trust in AI versus peers' advice. Students are randomly assigned to either a treatment group receiving advice labeled as coming from an AI system or a control group receiving advice labeled as coming from human peers. Our results are in line with recent laboratory experiments documenting algorithm appreciation. However, this effect is moderated by subject characteristics: male and high-knowledge participants place significantly less weight on AI advice than on peer advice. Notably, these trust patterns persist regardless of advice quality (correct or incorrect). Moreover, our results remain consistent over a four-week period, including after performance feedback is provided during the second week, which allows subjects to make more informed trust decisions.
Related works
The global landscape of AI ethics guidelines
2019 · 4,721 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,884 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,510 citations
Fairness through awareness
2012 · 3,302 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,200 citations