This is an overview page with metadata for this scientific work. The full article is available from the publisher.
P717 Evaluating the performance of Large Language Models in responding to patients' health queries: A comparative analysis with medical experts
Citations: 1
Authors: 9
Year: 2024
Abstract
Background: Patients with chronic diseases show heightened interest in seeking health information, and access to high-quality information can positively affect clinical outcomes. While previous research on static internet text and video content has raised concerns that low barriers to creation lead to low-quality content, it remains uncertain whether similar issues persist in responses generated by Large Language Models (LLMs). Assessing the ability of LLMs to respond to medical queries provides valuable insights for their application in healthcare settings.
Methods: In alignment with open science principles, we used real patient queries from the China Crohn's and Colitis Foundation (CCCF) series "Questions and Answers on Ulcerative Colitis and Crohn's Disease." The dataset comprised questions posed by patients and corresponding answers from medical professionals, collected from outpatient visits and online social media. In September 2023, 263 patient questions were sequentially input into ChatGPT-3.5 (August 3, 2023 version), and the resulting responses were compiled alongside the original medical professional responses, forming 263 modules. Three Inflammatory Bowel Disease (IBD) specialist physicians and three IBD patients were invited to assess each module. Evaluators were instructed to: 1) choose their preferred response version, and 2) provide a multidimensional 5-point Likert subjective assessment using a crowdsourcing strategy. Additionally, the CRIE 3.0 team conducted an automated objective analysis of Simplified Chinese readability.
Results: Mann-Whitney U tests on text readability levels (median: 7th grade for both medical professional and ChatGPT responses; Q1: 6th grade; Q3: 8th grade) revealed no significant difference (p=0.87), suggesting that ChatGPT's output aligns well with recommended literacy levels for popular science publications and is comparable to the average education level in China.
Conclusion: Interpreting our findings cautiously, ChatGPT's preliminary performance appears comparable to that of specialized IBD physicians, indicating its potential utility in patient community Q&A. Integrating ChatGPT or similar LLMs into the drafting or refinement stages of health texts is feasible. However, given the presence of AI hallucinations and in line with the consensus of most experimental conclusions, direct use of large language models for patient Q&A services is not recommended. Recognizing the variability in health information understanding between medical professionals and patients can enhance patient education efforts.
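The readability comparison reported in the Results can be sketched as a two-sided Mann-Whitney U test on grade-level scores. The sketch below uses only the Python standard library and a normal approximation for the p-value; the grade-level samples are hypothetical placeholders, not the study's data, and the tie correction to the variance is omitted for brevity.

```python
from math import sqrt
from statistics import NormalDist

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation.

    Ties receive average ranks; the tie correction to the variance
    is omitted, so p-values for heavily tied data are approximate.
    """
    # Rank the pooled samples, averaging ranks over tied runs.
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied run
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    n1, n2 = len(x), len(y)
    r1 = sum(ranks[:n1])            # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)       # smaller of the two U statistics
    mu = n1 * n2 / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma            # z <= 0 because u is the smaller U
    p = min(2 * NormalDist().cdf(z), 1.0)
    return u, p

# Hypothetical grade-level samples (NOT the study's data):
doctors = [6, 7, 7, 8, 7, 6, 8, 7]
chatgpt = [7, 6, 8, 7, 7, 8, 6, 7]
u, p = mann_whitney_u(doctors, chatgpt)
```

In practice one would use `scipy.stats.mannwhitneyu`, which applies exact methods and tie corrections; the manual version above only illustrates the rank-based logic behind the test.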
Related works
Improving the Quality of Web Surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES)
2004 · 6,073 citations
The content validity index: Are you sure you know what's being reported? critique and recommendations
2006 · 6,009 citations
Health literacy and public health: A systematic review and integration of definitions and models
2012 · 5,771 citations
Low Health Literacy and Health Outcomes: An Updated Systematic Review
2011 · 5,170 citations
Health literacy as a public health goal: a challenge for contemporary health education and communication strategies into the 21st century
2000 · 4,903 citations
Authors
Institutions
- Zhejiang Chinese Medical University (CN)
- Second Affiliated Hospital of Zhejiang University (CN)
- Zhejiang University (CN)
- Hangzhou Medical College (CN)
- Huaian First People’s Hospital (CN)
- Second People’s Hospital of Huai’an (CN)
- Nanjing Medical University (CN)
- Xuzhou Medical College (CN)
- National Taipei University of Education (TW)
- National Taiwan University of Science and Technology (TW)