This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
THU252 Can An AI Chatbot, ChatGPT, Help Our Patients with Diabetes And Weight Loss?
Citations: 1
Authors: 6
Year: 2023
Abstract
Abstract Disclosure: E. Cheng: None. G. Wu: None. W. Zhao: None. A. Wong: None. J. Lee: None. K. Tien: None.

Background: ChatGPT, launched 11/30/22, uses the newest Artificial Intelligence (AI) natural-language software to answer questions. It does not search the internet, and its knowledge is current only through 2021. According to the CDC, 37.3M Americans have diabetes, and approximately 70M Americans are overweight or obese (1/3 of all adults). 54% of U.S. adults read below a 6th-grade level. Many overweight people are at risk for diabetes.

Purpose: To evaluate the newest AI chatbot, ChatGPT, on its answers to questions about diabetes and weight loss.

Methods: Questions about diabetes and weight loss were created and typed into ChatGPT and similarly submitted to an older AI software-mediated chatbot, Google Assistant. Questions:
1. How can I treat my diabetes and weight problem at the same time?
2. What medications can I use for weight loss?
3. What herbal medicines can I use for weight loss?
4. If I have hypertension and I want to lose weight, what should I do?
5. Where can I obtain reliable information about weight loss?
Word count and Flesch-Kincaid Reading Level were the metrics used to assess the ChatGPT and Google Assistant responses. The Flesch-Kincaid Reading Level is a mathematical formula, used by the U.S. Department of Education, that determines a passage's grade level from sentence length and the number of syllables in its words.

Results: Word count: ChatGPT Q1=189, Q2=199, Q3=217, Q4=230, Q5=210 vs. Google: Q1=65, Q2=46, Q3=56, Q4=58, Q5=45. ChatGPT word count avg=209, SD=15.86; Google word count avg=54, SD=8.46; p=0.0000246. Flesch-Kincaid: ChatGPT Q1=13.1, Q2=11, Q3=11, Q4=11.7, Q5=15.5 vs. Google Q1=5.6, Q2=10.9, Q3=10.2, Q4=12, Q5=1.6. ChatGPT Flesch-Kincaid avg=12.46, SD=8.06; Google Flesch-Kincaid avg=8.6, SD=4.36; p=0.09366. Google Assistant has a lower average grade level of comprehension than ChatGPT. For Q1, ChatGPT answered at a 13.1 grade level vs. Google at a 5.6 grade level; this is because Google Assistant answers with excerpts from websites, whereas ChatGPT is trained on information prior to 2021.

Conclusion: ChatGPT uses artificial intelligence to generate natural language, and its answers are coherent and understandable for those with a high school education. In the future, as more users interact with ChatGPT, its internal reinforcement software should produce responses better geared to the reading level of the lay public.

Presentation: Thursday, June 15, 2023
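The Flesch-Kincaid Grade Level used in the Methods is a published formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. A minimal sketch, taking the three counts as inputs (automated syllable counting varies by implementation, so it is left out here):

```python
def flesch_kincaid_grade(total_words: int, total_sentences: int,
                         total_syllables: int) -> float:
    """Flesch-Kincaid Grade Level: higher values mean harder text.

    The two ratios are average sentence length (words per sentence)
    and average word length (syllables per word); the constants are
    from the published formula.
    """
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words)
            - 15.59)

# Example: a 100-word passage with 5 sentences and 150 syllables
# scores just under a 10th-grade level.
print(round(flesch_kincaid_grade(100, 5, 150), 2))  # 9.91
```

Longer sentences and longer words both push the score up, which is consistent with ChatGPT's longer answers scoring at higher grade levels in the Results.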
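The reported word-count averages and standard deviations can be reproduced from the per-question values with the standard library (a sketch; it assumes the abstract reports the sample standard deviation, which matches the printed figures):

```python
import statistics

# Per-question word counts (Q1-Q5) as reported in the Results.
chatgpt_words = [189, 199, 217, 230, 210]
google_words = [65, 46, 56, 58, 45]

for name, counts in [("ChatGPT", chatgpt_words), ("Google", google_words)]:
    avg = statistics.mean(counts)
    sd = statistics.stdev(counts)  # sample SD (n - 1 denominator)
    print(f"{name}: avg={avg:.0f}, SD={sd:.2f}")
# ChatGPT: avg=209, SD=15.86
# Google: avg=54, SD=8.46
```

The abstract does not state which significance test produced the p-values, so that step is not reproduced here.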
Similar works
World Health Organization 2020 guidelines on physical activity and sedentary behaviour
2020 · 10,371 citations
Health literacy and public health: A systematic review and integration of definitions and models
2012 · 5,831 citations
Low Health Literacy and Health Outcomes: An Updated Systematic Review
2011 · 5,213 citations
Health Literacy
2004 · 3,411 citations
The evolving concept of health literacy
2008 · 2,804 citations