This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Use of ChatGPT to obtain health information in Australia, 2024: insights from a nationally representative survey
27 citations · 3 authors · 2025
Abstract
Since the launch of ChatGPT in 2022,1 people have had easy access to a generative artificial intelligence (AI) application that can provide answers to most health-related questions. Although ChatGPT could massively increase access to tailored health information, the risk of inaccurate information is also recognised, particularly with early ChatGPT versions, and its accuracy varies by task and topic.2 Generative AI tools could be a further problem for health services and clinicians, adding to the already large volume of medical misinformation.3 Discussions of the benefits and risks of the new technology for health equity, patient engagement, and safety need reliable information about who is using ChatGPT, and the types of health information they are seeking.

To examine the use of ChatGPT in Australia for obtaining health information, we surveyed a nationally representative sample of adults (18 years or older) drawn from the June 2024 wave of the Life in Australia panel.4 Participants who completed the Life in Australia survey online or by telephone were asked how often they used ChatGPT for health information purposes during the preceding six months, the type of questions they asked, and their trust in the responses. Participants who were aware of ChatGPT but had not used it for health information purposes were asked about their intentions to do so in the following six months. Health literacy was assessed using a validated single-item screener: "If you need to go to the doctor, clinic or hospital, how confident are you filling out medical forms by yourself?"5 Demographic information was derived from previously collected panel data. Residential postcode-based socio-economic standing was classified according to the Index of Relative Socio-economic Advantage and Disadvantage (IRSAD; by quintile).6 Participant responses were weighted to the Australian population using propensity scores.
Associations between respondent characteristics and survey responses were assessed using simple logistic regression; we report odds ratios (ORs) with 95% confidence intervals (CIs). Analyses were conducted in SPSS 26. Unless otherwise stated, we report unweighted results (further study details: Supporting Information, part 1). Our study was approved by the University of Sydney Human Research Ethics Committee (2024/HE000247).

Of 2951 invited panellists, 2034 completed the three ChatGPT survey items and the health literacy item (68.9%). The demographic characteristics of the sample were similar to those of the Australian population (data not shown). The weighted proportion of participants who had heard of ChatGPT was 84.7% (95% CI, 83.0–86.3%). The weighted proportion of participants who had used ChatGPT to obtain health-related information during the preceding six months was 9.9% (95% CI, 8.5–11.4%). The proportion of people who had used ChatGPT to obtain health-related information was larger than their overall respondent proportion for people who were aged 18–44 years, lived in capital cities, were born in non-English speaking countries, spoke languages other than English at home, or had limited or marginal health literacy (Box 1; Supporting Information, table 3). Among the 187 people who asked ChatGPT health-related questions, trust in the tool was moderate (mean score, 3.1 [of 5]; standard deviation [SD], 0.8). Their questions most frequently related to learning about a specific health condition (89, 48%), finding out what symptoms mean (70, 37%), finding actions to take (67, 36%), and understanding medical terms (65, 35%) (Box 2).
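For a binary predictor, the unadjusted odds ratios and Wald 95% confidence intervals produced by simple logistic regression can be computed directly from a 2×2 table. The sketch below uses hypothetical counts, not figures from this study; it only illustrates the form of estimate reported here (e.g. "OR, 2.62; 95% CI, 1.27–5.39").

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.959964):
    """Wald odds ratio and 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (not from the study):
or_, lo, hi = odds_ratio_ci(40, 160, 20, 180)
print(f"OR, {or_:.2f}; 95% CI, {lo:.2f}-{hi:.2f}")  # → OR, 2.25; 95% CI, 1.26-4.01
```

This hand calculation matches what simple (single-predictor) logistic regression returns after exponentiating the coefficient and its confidence limits.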
At least one higher risk question (ie, questions related to taking action that would typically require clinical advice, rather than questions about general health information) had been asked by 115 participants (61%); the proportion was larger for people born in mainly non-English speaking countries than for those born in Australia (OR, 2.62; 95% CI, 1.27–5.39) and for those who spoke a language other than English at home (OR, 2.24; 95% CI, 1.16–4.32) (Supporting Information, table 4). Among the 1523 respondents who were aware of ChatGPT but had not used it for health-related questions during the preceding six months, 591 (38.8%) reported they would consider doing so in the next six months, most frequently for learning about a specific health condition (276 of 1523, 18.1%), understanding medical terms (256, 16.8%), or finding out what symptoms mean (249, 16.3%) (Supporting Information, table 5). At least one higher risk question would be considered by 375 participants (24.6%); the proportion was larger for participants with year 12 education or less (OR, 1.76; 95% CI, 1.19–2.61) or an advanced diploma or diploma (OR, 1.67; 95% CI, 1.11–2.51) than for people with postgraduate degrees; for women (OR, 1.34; 95% CI, 1.06–1.70) than for men; and for people aged 35–44 (OR, 2.14; 95% CI, 1.25–3.68), 55–64 (OR, 2.11; 95% CI, 1.21–3.69), or 65 years or older (OR, 2.69; 95% CI, 1.58–4.59) than for people aged 18–24 years (Supporting Information, table 6).

On the basis of our exploratory study, we estimate that 9.9% of Australian adults (about 1.9 million people7) asked ChatGPT health-related questions during the six months preceding the June 2024 survey. Given the rapid growth in AI technology and the availability of similar tools,8 this may be a conservative estimate of the use of generative AI services for obtaining health-related information.
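The "about 1.9 million people" figure follows from applying the weighted prevalence to the Australian adult population. A back-of-envelope check, assuming an adult (18+) population of roughly 19.2 million in mid-2024 (the article's actual denominator comes from its reference 7):

```python
# Back-of-envelope check of the "about 1.9 million people" estimate.
# The adult population figure is an assumption here, not taken from the article.
prevalence = 0.099        # weighted proportion who used ChatGPT for health information
adults = 19_200_000       # assumed Australian population aged 18+, mid-2024
users = prevalence * adults
print(f"{users / 1e6:.1f} million")  # → 1.9 million
```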
The number of users is likely to grow: 38.8% of participants who were aware of ChatGPT but had not recently used it for health-related questions were considering doing so within six months. We also found health-related ChatGPT use was higher for groups who face barriers to health care access,9 including people who were born in non-English speaking countries, do not speak English at home, or whose health literacy is limited or marginal. The types of health questions that pose a higher risk for the community will change as AI evolves, and identifying them will require further investigation. There is an urgent need to equip our community with the knowledge and skills to use generative AI tools safely, in order to ensure equity of access and benefit.

Julie Ayre and Kirsten McCaffery are supported by National Health and Medical Research Council fellowships (APP2017278, APP2016719). The funders were not involved in study design, data collection, analysis or interpretation, reporting or publication. We acknowledge the contribution of Tara Haynes (Sydney Health Literacy Lab, University of Sydney) to the preparation of the ethics application for this study. Open access publishing facilitated by the University of Sydney, as part of the Wiley – the University of Sydney agreement via the Council of Australian University Librarians. No relevant disclosures. The data underlying this report are available on reasonable request.

Supplementary methods and results. Please note: the publisher is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,439 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,315 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,756 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,526 citations