OpenAlex · Updated hourly · Last updated: 12.03.2026, 21:17

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Exploring Trust and Mistrust Dynamics: Generative AI-Curated Narratives in Health Communication Media Content Among Gen X

2025 · 2 citations
Open full text at the publisher

Citations: 2 · Authors: 5 · Year: 2025

Abstract

Large language models, generative adversarial networks (GANs), and variational autoencoders (VAEs) are the foundational technologies behind interfaces such as ChatGPT (Chat Generative Pre-Trained Transformer, a text generator) and DALL-E 2 (a text-to-image generator), and they are poised to change how users access and understand health information. The rapid uptake of and investment in these technologies suggest they will be transformative, yet their implications for health communication remain unclear. In this viewpoint, we present a study that measures individual trust using a previously established trust scale and examines how displaying disclaimers affects trust in content generated by artificial intelligence (AI). Analysis of the data in SmartPLS indicates that all three components of trust have a positive impact on individual trust, and semi-structured interviews reinforce these findings. This study sheds light on the adoption of new information technologies, focusing on how generative AI tools such as large language models, GANs, and VAEs may alter the production and consumption of health information. We explore how these technologies may shape the content people encounter, blend marketing and misinformation with evidence, and affect the factors that influence trust.

Similar works