This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Curiosity, reassurance and convenience: Understanding public motivations for using Generative AI in symptom assessment
0
Citations
7
Authors
2026
Year
Abstract
<title>Abstract</title>
<bold>Introduction</bold> Generative artificial intelligence (GenAI) tools such as large language models are increasingly used for health information and symptom appraisal. While diagnostic benchmarking studies are expanding, little population-level evidence explains why individuals use GenAI for symptom assessment or how motivations relate to trust, perceived accuracy and preferences for integration within health systems.
<bold>Methods</bold> We analysed data from the 2025 RADIANT VOICES cross-sectional survey of UK adults (n=1449). Participants reported GenAI use, motivations for symptom-related use, trust across use cases and comparative accuracy expectations. Internal consistency supported aggregation of trust (α=0.90) and perceived accuracy (α=0.87) scales. To compare symptom-assessment users with non-users, we applied 1:1 nearest-neighbour propensity score matching (caliper 0.2) across sociodemographic and digital-literacy covariates, achieving standardised mean differences <0.10. Weighted regression models with Benjamini–Hochberg correction were used for outcome comparisons.
<bold>Results</bold> Overall, 62.8% reported using GenAI; 49.7% of these had used it to assess symptoms. The most common motivations were curiosity (72.3%), reassurance following a diagnosis (60.7%) and convenience (32.1%). Trust was highest for low-risk informational uses (self-care advice 75.3%) and lowest for urgent-care guidance (38.0%). Participants generally rated GenAI as less accurate than general practitioners (55.8% perceived lower accuracy) but more accurate than traditional non-AI symptom checkers (35.8% perceived higher accuracy). After matching, symptom-assessment users demonstrated higher overall trust (mean difference 0.60, p<0.001), higher perceived accuracy (mean difference 0.30, p<0.001) and greater support for NHS integration (OR=4.06, 95% CI 2.85–5.79). Support for transparency was near-universal (97.0%).
<bold>Conclusion</bold> Public use of GenAI for symptom assessment is widespread and primarily motivated by curiosity, reassurance and convenience. Trust is risk-dependent, favouring informational tasks over urgent decisions. Integration strategies should prioritise scope clarity, human oversight and transparency.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations