This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Disentangling (Hybrid) Trustworthiness of Communicative Generative AI as Intermediary for Science-related Information—Results from a Qualitative Interview Study
Citations: 0
Authors: 3
Year: 2025
Abstract
The increasing prevalence of communicative Generative AI, such as ChatGPT, highlights its transformative potential for science communication while raising critical questions about users’ trust in these systems as conveyors of science-related information. As perceptual hybrids, these agents challenge traditional notions of trustworthiness, and it remains unclear whom or what users regard as the object of trust. This qualitative interview study (n = 34) integrates dimensions from human-machine and epistemic trustworthiness within a hybrid framework, complemented by a descriptive source orientation model. It highlights that trustworthiness assessments can extend beyond a chatbot’s interface, emphasizing the perceived salience of its underlying infrastructure, developers, and organizations. By exploring the multifaceted nature of trustworthiness, the study offers a theoretical and empirical contribution to understanding how diverse layers shape users’ trustworthiness perceptions, particularly in the context of science-related information seeking.
Similar Works
The global landscape of AI ethics guidelines
2019 · 4,541 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,859 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,395 citations
Fairness through awareness
2012 · 3,270 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations