This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Exploring Trust and Mistrust Dynamics: Generative AI-Curated Narratives in Health Communication Media Content Among Gen X
Citations: 2 · Authors: 5 · Year: 2025
Abstract
Large language models, generative adversarial networks (GANs), and variational autoencoders (VAEs) are foundational technologies underlying interfaces such as ChatGPT (the Chat Generative Pre-Trained Transformer, a text generator) and DALL-E 2 (a text-to-image generator), and they are poised to revolutionize how users access and understand health information. The rapid uptake of and investment in these technologies suggest they will be transformative, yet their implications for health communication remain unclear. In this viewpoint, we present a research study that measures individual trust using a previously established trust scale and examines the impact of displaying disclaimers on trust in content generated by artificial intelligence (AI). Data analysis using SmartPLS indicates that all three components of trust have a positive impact on individual trust. Semi-structured interviews further reinforce these findings. This study sheds light on the adoption of new information technologies, focusing on how generative AI tools such as large language models, GANs, and VAEs may alter the production and consumption of health information. We explore how these technologies may shape the content people encounter, the blending of marketing and misinformation with evidence, and the factors that influence trust.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations