This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Trusting Generative AI for Health Advice: A Pre-Registered Survey Experiment (Preprint)
Citations: 0
Authors: 3
Year: 2026
Abstract
BACKGROUND: Generative artificial intelligence (AI) systems are increasingly used for health information seeking, yet it remains unclear how the public evaluates AI-generated health advice relative to guidance from credentialed clinicians in digital environments. Understanding the conditions under which AI is perceived as credible is critical as these systems become integrated into digital health ecosystems.

OBJECTIVE: This study examined how source type (a human nurse in an online portal, a healthcare-specialized “AI Nurse,” or ChatGPT, a general-purpose chatbot), message characteristics, contextual risk, values framing, and individual differences in medical skepticism and experience with AI shape credibility evaluations of the provided advice and its purported source.

METHODS: In a preregistered online experiment, a national sample of U.S. participants (N=1502) was randomly assigned to one of three source conditions and evaluated health advice across three scenarios: low risk (dietary advice for cholesterol), high risk (chest pain triage), and a morally sensitive scenario (egg freezing). Advice type (intuitive vs counterintuitive) was manipulated in the risk scenarios, and ideological framing (neutral, conservative-leaning, liberal-leaning) was manipulated in the morally sensitive scenario. Primary outcomes included participants’ perceived credibility of the advice and beliefs about whether the patient should follow it. Source-level perceptions of competence and benevolence were also assessed. Medical skepticism and prior AI experience were examined as moderators.

RESULTS: Advice attributed to a human nurse was rated as more credible than advice attributed to either AI source. Message intuitiveness showed effects comparable to and sometimes larger than the effects of source: intuitive advice was perceived as more credible than counterintuitive advice, with this difference amplified in high-risk contexts. In the morally sensitive scenario, ideological framing influenced perceived bias but did not interact significantly with source. Medical skepticism moderated source evaluations: higher skepticism was associated with greater perceived competence of the AI Nurse and lower perceived competence of the human nurse.

CONCLUSIONS: Generative AI is evaluated within existing credibility frameworks rather than dismissed outright as inferior to human expertise. While licensed clinicians retain a credibility advantage, AI-generated advice is generally perceived as competent and legitimate. Importantly, individuals skeptical of traditional medical authority may evaluate AI-based guidance more favorably, suggesting that AI systems may redistribute, rather than uniformly erode, trust in health advice. As AI tools become embedded in patient-facing health platforms, message design and audience characteristics may shape acceptance more strongly than source labeling alone.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,561 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,452 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,948 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,797 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations