This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Assessing Patient Education Guide Generated by ChatGPT vs Google Gemini on Common Hepatology Conditions: A Cross-sectional Study
Citations: 0
Authors: 5
Year: 2026
Abstract
Aims and objectives: To compare ChatGPT- and Google Gemini-generated patient education guides on hepatitis, cirrhosis, and non-alcoholic fatty liver disease.

Introduction: As artificial intelligence (AI) becomes more integrated into healthcare, assessing the quality of the health information it generates is important. This study evaluates patient information guides produced by ChatGPT and Google Gemini for common hepatology conditions, focusing on accessibility, clarity, and comprehensiveness.

Methodology: Guides from both AI systems were evaluated using Flesch-Kincaid readability tests, Quillbot for similarity scores, and the DISCERN score for reliability. A quantitative analysis was conducted on various parameters, including word and sentence counts.

Results: ChatGPT generated significantly more words and sentences than Google Gemini, indicating more extensive content. However, there were no statistically significant differences in average words per sentence, syllable count, grade level, ease score, similarity percentage, or reliability scores, suggesting comparable complexity and consistency between the two models.

Conclusions: The findings underscore the need to refine AI-generated health information to meet diverse patient needs. While AI shows promise in enhancing patient education, continuous evaluation and adaptation are essential to ensure clarity and balance in the information provided. Recommendations include improving content accessibility and reliability for optimal patient engagement.

How to cite this article: . Assessing Patient Education Guide Generated by ChatGPT vs Google Gemini on Common Hepatology Conditions: A Cross-sectional Study. Euroasian J Hepato-Gastroenterol 2025;15(2):173-177.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,560 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,451 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,948 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,797 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations