OpenAlex · Updated hourly · Last updated: 24.04.2026, 14:34

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

116. AI Showdown: Assessing The Future Of Surgery FAQs - Traditional Pamphlets Vs. ChatGPT, Google SGE, And Meta AI

2024 · 0 Citations · Plastic & Reconstructive Surgery Global Open · Open Access
Open full text at the publisher

Citations: 0 · Authors: 6 · Year: 2024

Abstract

Purpose: In an era defined by rapid advances in artificial intelligence (AI), healthcare information dissemination is undergoing a transformative evolution. Patients seeking medical guidance are no longer limited to traditional sources such as hospital brochures and pamphlets; AI-powered chatbots and search engines have emerged as accessible, rapid sources of information, offering answers to complex medical questions and frequently asked questions (FAQs). In the context of surgical procedures, the importance of accurate and comprehensible information cannot be overstated. One procedure that requires meticulous patient education is Deep Inferior Epigastric Perforator (DIEP) flap surgery, and patients considering it often seek detailed explanations and reassurance regarding their questions. This study aims to determine whether well-known AI systems, particularly ChatGPT, and newer chatbots, Google Search Generative Experience (SGE) and Meta AI, can rival or even surpass traditional healthcare information sources, such as hospital flyers, in terms of readability for DIEP flap FAQs.

Methods: A set of standardized FAQs was extracted from our institution's patient flyer. The questions covered a wide range of topics related to DIEP flap surgery, including procedure details, preoperative and postoperative considerations, risks, and benefits. The same set of standardized FAQs was then presented as queries to each of three AI sources: ChatGPT 4.0, Google SGE, and Meta AI. To evaluate the readability of the responses obtained from each source, standardized scoring was performed with Readability Professional Studio software based on five established readability measures. Dunnett's tests were performed to determine whether the readability of AI-generated responses differed significantly from that of the institution's flyer, as well as among the AI systems.

Results: When comparing the reading levels averaged across all five readability measures, ChatGPT (14.6 vs. 8.4; p = 0.001) and Meta AI (14.7 vs. 8.4; p < 0.001) displayed significantly higher reading levels than the hospital flyer. Google SGE did not differ significantly from the hospital flyer (9.9 vs. 8.4; p = 0.427). When comparing the newer AI sources (Google SGE and Meta AI) to ChatGPT, Meta AI did not exhibit a statistically significant difference in reading level (p = 0.986), while Google SGE did (p = 0.003).

Conclusion: Our results suggest that ChatGPT and Meta AI tend to generate responses that may require a more advanced reading comprehension level, potentially posing challenges for individuals with lower literacy or medical knowledge. Conversely, Google SGE may offer responses that align more closely with the readability of information traditionally provided by healthcare institutions. Because readability scores represent only one facet of information accessibility, future research is needed to investigate other factors and provide a more comprehensive understanding of how different AI systems will shape the future of patient education.
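For readers curious about what the analysis described in the Methods above might look like in practice, below is a minimal sketch, not the authors' actual pipeline. The paper scored responses with Readability Professional Studio's five (unnamed) measures; here five common grade-level formulas from the Python textstat package stand in for them. The FAQ answers are invented placeholders, Meta AI is omitted for brevity, and Dunnett's test comes from scipy.stats.dunnett (SciPy 1.11+).

```python
# Sketch of the abstract's analysis: score each source's FAQ answers on
# five readability formulas, then run Dunnett's test against the flyer.
# All response texts below are hypothetical placeholders.
import numpy as np
import textstat
from scipy.stats import dunnett  # Dunnett's test, requires SciPy >= 1.11

# Hypothetical answers to the same FAQs from each source.
responses = {
    "flyer": [
        "The DIEP flap uses skin and fat from your lower belly to rebuild the breast.",
        "Most people stay in the hospital for a few days after surgery.",
        "You will have a scar on your belly, like after a tummy tuck.",
    ],
    "chatgpt": [
        "The deep inferior epigastric perforator flap is an autologous reconstruction technique utilizing abdominal tissue while preserving the rectus abdominis musculature.",
        "Postoperative hospitalization typically spans three to five days, contingent on flap perfusion monitoring.",
        "Donor-site morbidity includes a transverse abdominal scar analogous to abdominoplasty.",
    ],
    "google_sge": [
        "DIEP flap surgery moves skin and fat from your abdomen to rebuild the breast.",
        "Most patients stay in the hospital about three to five days.",
        "You will have a scar across your lower abdomen.",
    ],
}

# Five grade-level formulas (stand-ins for the paper's five measures).
measures = [
    textstat.flesch_kincaid_grade,
    textstat.gunning_fog,
    textstat.smog_index,
    textstat.coleman_liau_index,
    textstat.automated_readability_index,
]

def mean_grade(text: str) -> float:
    """Average the five grade-level scores for one response."""
    return float(np.mean([m(text) for m in measures]))

# One averaged reading level per response, grouped by source.
grades = {name: [mean_grade(t) for t in texts] for name, texts in responses.items()}

# Dunnett's test: each AI source compared against the flyer as control.
result = dunnett(grades["chatgpt"], grades["google_sge"], control=grades["flyer"])
for name, p in zip(["chatgpt", "google_sge"], result.pvalue):
    print(f"{name}: mean reading level {np.mean(grades[name]):.1f}, p = {p:.3f}")
print(f"flyer: mean reading level {np.mean(grades['flyer']):.1f} (control)")
```

With the study's real per-question scores in place of the placeholders, result.pvalue would correspond to the flyer-vs-AI comparisons reported in the Results; comparing the newer chatbots to ChatGPT would use a second call with ChatGPT's scores as the control group.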

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging