This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
AI, Culture, and Trust: A Global Look at User Confidence in Virtual Assistants
Citations: 0
Authors: 4
Year: 2025
Abstract
Virtual assistants (VAs) powered by AI, such as Siri, Alexa, and Google Assistant, are increasingly embedded in everyday life. Their adoption depends critically on user trust, which is influenced not only by system performance but also by cultural context. This paper investigates the dynamics of trust in VAs by synthesizing empirical findings from recent studies (n ≈ 1,250 participants across healthcare, consumer, and enterprise domains). We examine four principal antecedents—perceived competence, transparency/explainability, privacy and security, and anthropomorphism—and analyze how cultural dimensions moderate their influence. Findings indicate that competence and privacy consistently drive trust across contexts, but the weight of transparency and anthropomorphism varies by cultural orientation (notably, high uncertainty avoidance cultures demand transparency, while collectivist cultures emphasize social endorsement). We propose a conceptual model linking culture, trust antecedents, and adoption, and conclude with implications for design and governance.
Related works
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
1999 · 5,632 citations
An experiment in linguistic synthesis with a fuzzy logic controller
1975 · 5,550 citations
A FRAMEWORK FOR REPRESENTING KNOWLEDGE
1988 · 4,548 citations
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
2023 · 3,309 citations