OpenAlex · Updated hourly · Last updated: 23.03.2026, 10:23

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Disentangling (Hybrid) Trustworthiness of Communicative Generative AI as Intermediary for Science-related Information—Results from a Qualitative Interview Study

2025 · 0 citations · Human-Machine Communication · Open Access
Open full text at the publisher

Citations: 0 · Authors: 3 · Year: 2025

Abstract

The increasing prevalence of communicative Generative AI, such as ChatGPT, highlights its transformative potential for science communication while raising critical questions about users’ trust in these systems as conveyors of science-related information. As perceptual hybrids, these agents challenge traditional notions of trustworthiness, and it remains unclear whom or what users refer to as the object of trust. This qualitative interview study (n = 34) integrates dimensions from human-machine and epistemic trustworthiness within a hybrid framework, complemented by a descriptive source orientation model. It highlights that trustworthiness assessments can extend beyond a chatbot’s interface, emphasizing the perceived salience of its underlying infrastructure, developers, and organizations. By exploring the multifaceted nature of trustworthiness, the study offers a theoretical and empirical contribution to understanding how diverse layers shape users’ trustworthiness perceptions, particularly in the context of science-related information seeking.

Topics

Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education