This is an overview page with metadata for this scientific article. The full article is available from the publisher.
A Cross-Sectional Comparison of Patient Information Guides Generated by ChatGPT Versus Google Gemini for Alzheimer’s Disease, Parkinsonism, and Migraine
0
Citations
5
Authors
2025
Year
Abstract
Introduction
This study aims to compare the characteristics of educational brochures produced by two large language models for common neurological diseases: migraine (MIG), Parkinson's disease, and Alzheimer's disease (AD). Despite the enthusiasm surrounding these technologies, there remains a critical need to systematically investigate their effectiveness, usability, and impact within healthcare contexts. This cross-sectional study investigates patient education brochures for AD, Parkinsonism, and MIG, emphasizing the emerging role of AI-driven tools such as ChatGPT and Google Gemini.

Methods
Using a patient information brochure approach, we compared responses generated by ChatGPT and Google Gemini, which at the time of the study were two of the most well-known and well-developed AI tools, by using the prompt "This cross-sectional study investigates patient education brochures for Alzheimer's disease, Parkinsonism, and migraine, emphasizing the emerging role of AI-driven tools, such as ChatGPT and Google Gemini." Readability and reliability were assessed using the Flesch-Kincaid calculator and the Modified DISCERN Score, respectively. Statistical analysis was conducted using R software version 4.3.2.

Results
The results show no significant differences in mean word and sentence counts between the models, although Google Gemini produced shorter texts with fewer sentences (p = 0.04). Both models had similar average words per sentence (p = 0.97) and syllables per word (p = 0.28), but Google Gemini's texts were slightly more complex (ease score p = 0.29). Google Gemini's outputs were also more original, with lower similarity scores (p = 0.04). Pearson correlation coefficients indicated a moderate negative, though statistically non-significant, relationship between ease and reliability scores for both models.

Conclusions
While Google Gemini produced shorter and potentially more original content, no significant superiority of one AI tool over the other was observed, suggesting the need for ongoing refinement to optimize patient education materials for neurological conditions.
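The readability metric used in the Methods, the Flesch reading-ease score, is a standard formula based on average sentence length and average syllables per word. As an illustrative sketch (not the authors' actual tooling, which was a Flesch-Kincaid calculator; the `count_syllables` heuristic here is a naive assumption):

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels;
    # every word is assumed to have at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # Flesch reading-ease:
    # 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Higher scores indicate easier text; patient education materials are commonly targeted at a score of 60 or above. A real analysis would use a validated syllable counter, since the vowel-group heuristic miscounts words like "people" or "area".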
Related Works
Improving the Quality of Web Surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES)
2004 · 6,070 citations
The content validity index: Are you sure you know what's being reported? critique and recommendations
2006 · 6,004 citations
Health literacy and public health: A systematic review and integration of definitions and models
2012 · 5,770 citations
Low Health Literacy and Health Outcomes: An Updated Systematic Review
2011 · 5,168 citations
Health literacy as a public health goal: a challenge for contemporary health education and communication strategies into the 21st century
2000 · 4,899 citations