OpenAlex · Updated hourly · Last updated: 16.03.2026, 11:29

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Balancing AI transparency: Trust, Certainty, and Adoption

2025 · 3 citations · Information Development
Open full text at the publisher

3 citations · 1 author · Year: 2025

Abstract

This study examines the non-linear relationship between transparency and AI use intention, challenging the assumption that increased transparency always enhances AI adoption. A web-based experiment with 491 participants was conducted across two interactions with AI systems, fake news detection (cognitive) and friending recommendations (social), manipulating transparency (real, placebic, or absent). Using quadratic regression and threshold analysis, we find an inverted U-shaped effect: moderate transparency fosters trust and certainty, but excessive transparency leads to cognitive overload and heightened scrutiny, reducing AI adoption. Additionally, the study identifies key causal pathways, demonstrating that transparency influences AI use intention indirectly by enhancing trust and reducing uncertainty, with certainty and trust serving as significant mediators. These findings contribute to Trust Calibration Theory and Cognitive Load Theory, advocating for adaptive transparency models that tailor AI explanations to user expertise, task complexity, and engagement levels to maximize usability and trust.
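The inverted U-shaped effect described above can be illustrated with a quadratic regression: a negative coefficient on the squared term indicates the curve peaks at a moderate level, and the vertex gives the turning point. The sketch below uses synthetic data, not the study's data; the scales, coefficients, and variable names are illustrative assumptions only.

```python
import numpy as np

# Illustrative sketch with synthetic data mimicking an inverted-U
# relationship between transparency and use intention (not the study's data).
rng = np.random.default_rng(0)
transparency = rng.uniform(0, 10, 491)            # hypothetical 0-10 scale
use_intention = (-0.2 * (transparency - 5) ** 2 + 6
                 + rng.normal(0, 0.5, 491))       # true peak at x = 5

# Quadratic regression: use_intention ~ b2*x^2 + b1*x + b0
b2, b1, b0 = np.polyfit(transparency, use_intention, 2)

# An inverted U-shape shows up as a negative quadratic term (b2 < 0);
# the turning point (threshold) is the vertex at -b1 / (2*b2).
turning_point = -b1 / (2 * b2)
print(f"quadratic term: {b2:.3f}, turning point: {turning_point:.2f}")
```

With a negative `b2`, adoption rises with transparency only up to the vertex and declines beyond it, which is the pattern the threshold analysis in the abstract describes.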

Related works

Authors

Institutions

Topics

Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education