
This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

S205 Assessing the Ability of ChatGPT 4.0 as an Educational Tool for Patients With Pancreatic Cysts

2024 · 0 citations · 9 authors · The American Journal of Gastroenterology


Abstract

Introduction: As imaging technology continues to improve, pancreatic cysts are increasingly being diagnosed. ChatGPT is an artificial intelligence (AI) large language model (LLM) designed for conversational, interactive dialogue. Patients newly diagnosed with pancreatic cysts often have specific questions about their condition. Our study aims to evaluate the ability of ChatGPT 4.0 to provide high-quality, medically accurate, and comprehensible responses to common patient questions about pancreatic cysts.

Methods: Fourteen frequently asked questions (FAQs) regarding pancreatic cysts were compiled and entered into ChatGPT 4.0 to generate responses. Three board-certified attending gastroenterologists with expertise in the management of pancreatic cysts were recruited to write responses to the same FAQs. Both sets of answers were then anonymized, randomized, and distributed to a panel of 3 blinded patients with a diagnosis of pancreatic cysts. Using a 5-point Likert scale, the patients rated each response on quality, clarity, and empathy, and indicated which of the 2 responses they preferred. Respondents were also asked to designate whether each answer was physician-generated or AI-generated.

Results: There was no significant difference in quality scores between gastroenterologists (3.57) and ChatGPT (4.00), P = 0.056. The patient panel found gastroenterologist responses easier to understand (4.31) than ChatGPT responses (3.86), P = 0.0159. There was no significant difference in empathy scores between physician responses (3.31) and ChatGPT responses (3.14), P = 0.18. Patients preferred the ChatGPT response 57% of the time, and the panel correctly identified AI-generated responses in most cases (83.3%).

Conclusion: The burden on physicians of answering in-basket messages from patients continues to grow, and responding to questions about a new diagnosis of pancreatic cysts can be especially time-consuming. This study showed that ChatGPT generated responses to common questions about pancreatic cysts of quality and empathy similar to those of human gastroenterologists; in fact, patients in this study preferred the ChatGPT responses. As LLMs continue to evolve, the current iteration of ChatGPT may already have the potential to be used in specific clinical scenarios, such as pancreatic cysts, to streamline patient communication and help educate patients about their diagnosis (Figure 1).

Figure 1: Comparison between ChatGPT and gastroenterologist responses.
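The abstract reports mean Likert scores with p-values but does not name the statistical test behind them. The sketch below shows how such a paired comparison is commonly computed, assuming hypothetical rating data (3 patients × 14 FAQs = 42 paired ratings); the values, the random seed, and the choice of a paired t-test and a Wilcoxon signed-rank test are illustrative assumptions, not the study's actual analysis.

```python
# Minimal sketch of a paired Likert-score comparison, with HYPOTHETICAL data:
# the study's raw ratings are not public.
import numpy as np
from scipy import stats

# Placeholder 5-point Likert quality ratings: 3 patients x 14 FAQs = 42 pairs.
rng = np.random.default_rng(seed=42)
gastro = rng.integers(2, 6, size=42).astype(float)   # ratings for physician answers
chatgpt = rng.integers(2, 6, size=42).astype(float)  # ratings for ChatGPT answers

print(f"mean gastroenterologist score: {gastro.mean():.2f}")
print(f"mean ChatGPT score:            {chatgpt.mean():.2f}")

# Paired t-test: a common choice when each rater scores both answers
# to the same question.
t_stat, p = stats.ttest_rel(gastro, chatgpt)
print(f"paired t-test: t = {t_stat:.3f}, p = {p:.4f}")

# Wilcoxon signed-rank test: a non-parametric alternative often preferred
# for ordinal Likert data.
w_stat, w_p = stats.wilcoxon(gastro, chatgpt)
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.4f}")
```

For ordinal Likert responses, the non-parametric Wilcoxon test is often the safer default, since it does not assume the rating differences are normally distributed.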

Similar works