This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Changes in public perception of AI in healthcare after exposure to ChatGPT
Citations: 2
Authors: 4
Year: 2025
Abstract
Background: Artificial intelligence (AI) is expected to become an integral part of healthcare services, and the widespread adoption of AI tools in all areas of life is making AI accessible to the general public. Public perception of the benefits and risks of AI in healthcare is key to large-scale acceptance and implementation, and is increasingly influenced by first-hand experiences of AI. The aim of this study was to assess how exposure to ChatGPT changed public perception of AI in healthcare.

Methods: We used baseline and follow-up data from 5,899 survey participants, who reported their perception of AI in 2022 and 2024, and ChatGPT use in 2024. Administrative and healthcare data from nationwide Danish registers were used for weighting and adjustment. Multinomial multivariate logistic regression was used to model how exposure to ChatGPT affected changes in perception of AI.

Results: At baseline (before ChatGPT's launch), 2,236 individuals (37%) were unsure of the benefits and risks of AI in healthcare, 2,384 (40%) perceived net benefits, 1,083 (18%) perceived benefits and risks as equal, and 196 (3.3%) perceived net risks. At follow-up, 1,195 individuals (20%) had been exposed to ChatGPT use, which was associated with higher odds of changing perception of AI to benefits (OR 3.21 [95% CI: 2.34-4.40]) among individuals who were unsure at baseline, and lower odds of changing to uncertainty from more defined baseline perceptions (from benefits (OR 0.32 [0.24-0.42]), equal (OR 0.47 [0.32-0.69]), and risks (OR 0.27 [0.08-0.98])).

Conclusion: Exposure to ChatGPT was associated with a change towards positive perception of the benefits and risks of AI in healthcare among individuals who were uncertain prior to exposure, and individuals with more defined perceptions of AI were less likely to become uncertain after exposure to ChatGPT.
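The abstract reports results as odds ratios (e.g. OR 3.21 for changing perception to "benefits" among exposed, baseline-unsure respondents). As a minimal illustration of what an unadjusted odds ratio and its Wald confidence interval look like, here is a sketch using entirely hypothetical counts, not the study's data; the actual analysis was a multinomial multivariate logistic regression with register-based adjustment:

```python
import math

# Hypothetical 2x2 counts (NOT the study's data): among baseline-unsure
# respondents, how many changed their perception to "benefits",
# split by ChatGPT exposure.
exposed_changed, exposed_stayed = 120, 80
unexposed_changed, unexposed_stayed = 300, 600

# Odds of changing perception in each group
odds_exposed = exposed_changed / exposed_stayed        # 1.5
odds_unexposed = unexposed_changed / unexposed_stayed  # 0.5

odds_ratio = odds_exposed / odds_unexposed
print(odds_ratio)  # 3.0

# Wald 95% CI on the log-odds-ratio scale
se = math.sqrt(1 / exposed_changed + 1 / exposed_stayed
               + 1 / unexposed_changed + 1 / unexposed_stayed)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR {odds_ratio:.2f} [95% CI: {lo:.2f}-{hi:.2f}]")
```

An OR above 1 with a CI excluding 1 is the pattern behind the paper's reported association between ChatGPT exposure and changing to a "benefits" perception.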
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations