This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Balancing AI transparency: Trust, Certainty, and Adoption
3
Citations
1
Authors
2025
Year
Abstract
This study examines the non-linear relationship between transparency and AI use intention, challenging the assumption that increased transparency always enhances AI adoption. To this end, a web-based experiment with 491 participants was conducted across two interactions with AI systems, fake news detection (cognitive) and friending recommendations (social), manipulating transparency (real, placebic, or absent). Using quadratic regression and threshold analysis, we find an inverted U-shaped effect: moderate transparency fosters trust and certainty, whereas excessive transparency leads to cognitive overload and heightened scrutiny, reducing AI adoption. Additionally, the study identifies key causal pathways, demonstrating that transparency influences AI use intention indirectly by enhancing trust and reducing uncertainty, with certainty and trust serving as significant mediators. These findings contribute to Trust Calibration Theory and Cognitive Load Theory, advocating for adaptive transparency models that tailor AI explanations to user expertise, task complexity, and engagement levels to maximize usability and trust.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,504 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,856 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,377 citations
Fairness through awareness
2012 · 3,267 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations