OpenAlex · Updated hourly · Last updated: May 5, 2026, 05:29

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Trusting Generative AI for Health Advice: A Pre-Registered Survey Experiment (Preprint)

2026 · 0 Citations · Open Access
Open full text at publisher

Citations: 0
Authors: 3
Year: 2026

Abstract

<sec> <title>BACKGROUND</title> Generative artificial intelligence (AI) systems are increasingly used for health information seeking, yet it remains unclear how the public evaluates AI-generated health advice relative to guidance from credentialed clinicians in digital environments. Understanding the conditions under which AI is perceived as credible is critical as these systems become integrated into digital health ecosystems. </sec>

<sec> <title>OBJECTIVE</title> This study examined how source type (a human nurse in an online portal, a healthcare-specialized “AI Nurse,” or ChatGPT, a general-purpose chatbot), message characteristics, contextual risk, values framing, and individual differences in medical skepticism and experience with AI shape credibility evaluations of the provided advice and its purported source. </sec>

<sec> <title>METHODS</title> In a preregistered online experiment, a national sample of U.S. participants (N=1502) was randomly assigned to one of three source conditions and evaluated health advice across three scenarios: low risk (dietary advice for cholesterol), high risk (chest pain triage), and a morally sensitive scenario (egg freezing). Advice type (intuitive vs counterintuitive) was manipulated in the risk scenarios, and ideological framing (neutral, conservative-leaning, liberal-leaning) was manipulated in the morally sensitive scenario. Primary outcomes included participants’ perceived credibility of the advice and beliefs about whether the patient should follow it. Source-level perceptions of competence and benevolence were also assessed. Medical skepticism and prior AI experience were examined as moderators. </sec>

<sec> <title>RESULTS</title> Advice attributed to a human nurse was rated as more credible than advice attributed to either AI source. Message intuitiveness showed effects comparable to, and sometimes larger than, the effects of source: intuitive advice was perceived as more credible than counterintuitive advice, with this difference amplified in high-risk contexts. In the morally sensitive scenario, ideological framing influenced perceived bias but did not interact significantly with source. Medical skepticism moderated source evaluations: higher skepticism was associated with greater perceived competence of the AI Nurse and lower perceived competence of the human nurse. </sec>

<sec> <title>CONCLUSIONS</title> Generative AI is evaluated within existing credibility frameworks rather than dismissed outright as inferior to human expertise. While licensed clinicians retain a credibility advantage, AI-generated advice is generally perceived as competent and legitimate. Importantly, individuals skeptical of traditional medical authority may evaluate AI-based guidance more favorably, suggesting that AI systems may redistribute, rather than uniformly erode, trust in health advice. As AI tools become embedded in patient-facing health platforms, message design and audience characteristics may shape acceptance more strongly than source labeling alone. </sec>

Topics

Artificial Intelligence in Healthcare and Education · Misinformation and Its Impacts · Digital Mental Health Interventions