This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Evaluating Artificial Intelligence (AI)-Generated Patient Education Guides on Epilepsy: A Cross-Sectional Study of ChatGPT and Google Gemini
Citations: 5
Authors: 6
Year: 2024
Abstract
Introduction: Epilepsy is a chronic disorder whose management requires patient education, both to avoid triggers and to prevent complications. This study aims to evaluate and compare the effectiveness of two artificial intelligence (AI) tools, ChatGPT (version 3.5, OpenAI, Inc., San Francisco, United States) and Google Gemini (version 1.5, Google LLC, Mountain View, California, United States), in generating patient education guides for epilepsy disorders.

Methodology: A patient education guide was generated with ChatGPT and with Google Gemini. The study analyzed sentence count, readability, and ease of understanding using the Flesch-Kincaid calculator, examined similarity using the QuillBot plagiarism tool, and assessed reliability using a modified DISCERN score. Statistical analysis used an unpaired t-test, with a p-value <0.05 considered significant.

Results: There was no statistically significant difference between ChatGPT and Google Gemini in word count (p=0.75), sentence count (p=0.96), average words per sentence (p=0.66), grade level (p=0.67), similarity percentage (p=0.57), or reliability score (p=0.42). Reading-ease scores for ChatGPT and Google Gemini were 38.6 and 43.6 for generalized tonic-clonic seizures (GTCS), 18.7 and 45.5 for myoclonic seizures, and 22.4 and 55.8 for status epilepticus, respectively, indicating that Google Gemini generated notably more readable responses (p=0.0493). The average syllables per word (p=0.035) were also appreciably lower for Google Gemini responses (1.8 for GTCS and myoclonic seizures, 1.7 for status epilepticus) than for ChatGPT responses (1.9 for GTCS, 2.0 for myoclonic seizures, and 2.1 for status epilepticus).

Conclusions: A significant difference was observed in only two parameters. Further improvement in AI tools is necessary to produce effective patient education guides.
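The abstract reports an unpaired t-test on the reading-ease scores (p=0.0493) but does not state which variant was used. As a minimal sketch, the statistic can be recomputed from the three per-condition scores reported for each tool, here assuming Welch's unequal-variance form; a library routine such as `scipy.stats.ttest_ind(x, y, equal_var=False)` would also return the two-tailed p-value.

```python
import math

def welch_t(x, y):
    """Welch's unpaired t-test: return (t statistic, degrees of freedom).

    A full analysis would look up the two-tailed p-value from the
    t-distribution with these degrees of freedom.
    """
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # Bessel-corrected sample variances.
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    se2 = vx / nx + vy / ny
    t = (mx - my) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

# Flesch Reading Ease scores from the abstract (GTCS, myoclonic
# seizures, status epilepticus).
chatgpt_ease = [38.6, 18.7, 22.4]
gemini_ease = [43.6, 45.5, 55.8]

t, df = welch_t(chatgpt_ease, gemini_ease)
print(f"t = {t:.2f}, df = {df:.1f}")  # t = -3.02, df = 3.3
```

The negative t statistic reflects the lower (harder-to-read) mean score of the ChatGPT responses; with only three observations per group, the degrees of freedom are small and the result sits close to the 0.05 threshold, consistent with the borderline p-value the authors report.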
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations
Authors
Institutions
- Sardar Patel Medical College (IN)
- Asian Institute of Gastroenterology (IN)
- University of Medicine 2 Yangon (MM)
- University of Medicine 1 Yangon (MM)
- B. J. Medical College & Sassoon Hospital (IN)
- Government Medical College Bhavnagar (IN)
- Sri Ramachandra Institute of Higher Education and Research (IN)
- University Hospitals of Leicester NHS Trust (GB)