OpenAlex · Updated hourly · Last updated: 18.03.2026, 20:20

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Performance of Large Language Models (ChatGPT and Gemini Advanced) in Gastrointestinal Pathology and Clinical Review of Applications in Gastroenterology

2025 · 3 citations · Cureus · Open Access
Open full text at the publisher

Citations: 3 · Authors: 4 · Year: 2025

Abstract

Introduction: Artificial intelligence (AI) chatbots have been widely tested on various examinations, but data on their performance in clinical scenarios remain limited. Chat Generative Pre-Trained Transformer (ChatGPT) (OpenAI, San Francisco, California, United States) and Gemini Advanced (Google LLC, Mountain View, California, United States) have shown some promise in multiple aspects of gastroenterology, including answering patient questions, providing medical advice, and potentially assisting healthcare providers, though with many limitations. We aimed to study the performance of ChatGPT-4.0, ChatGPT-3.5, and Gemini Advanced across 20 clinicopathologic scenarios in the unexplored realm of gastrointestinal pathology.

Materials and methods: Twenty clinicopathological scenarios in gastrointestinal pathology were provided to the three large language models. Two fellowship-trained pathologists independently assessed the responses, evaluating both the diagnostic accuracy and confidence of the models; the results were compared using the chi-squared test. The study also evaluated each model in four key areas, namely, (1) ability to provide differential diagnoses, (2) interpretation of immunohistochemical stains, (3) ability to deliver a concise final diagnosis, and (4) explanation of the thought process, using a five-point scoring system. The mean, median score ± standard deviation (SD), and interquartile ranges were calculated. A comparative analysis of these four parameters across ChatGPT-4.0, ChatGPT-3.5, and Gemini Advanced was conducted using the Mann-Whitney U test; a p-value of <0.05 was considered statistically significant. Other parameters evaluated were the ability to provide a tumor, node, and metastasis (TNM) stage and the incidence of pseudo-references ("hallucinations") while citing reference material.

Results: Gemini Advanced (diagnostic accuracy: p=0.01; providing differential diagnosis: p=0.03) and ChatGPT-4.0 (interpretation of immunohistochemistry (IHC) stains: p=0.001; providing differential diagnosis: p=0.002) performed significantly better in certain realms than ChatGPT-3.5, indicating continuously improving training data sets. However, the mean performances of ChatGPT-4.0 and Gemini Advanced ranged between 3.0 and 3.7 and were at best classified as average. None of the models could provide accurate TNM staging for these clinical scenarios, and 25-50% of responses cited references that do not exist (hallucinations).

Conclusion: This study indicates that although these models are evolving, they require human supervision and definite improvements before being used in clinical medicine. To the best of our knowledge, this is the first study of its kind in gastrointestinal pathology.
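The Mann-Whitney U comparison described in the methods can be sketched as follows. This is a minimal illustration only: the five-point scores below are hypothetical, not the study's data, and `mann_whitney_u` is a hypothetical helper using the pairwise-counting form of the statistic; in practice the p-value would come from a statistical library such as `scipy.stats.mannwhitneyu`.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample `a` versus sample `b`.

    Uses the pairwise-counting definition: each pair where the
    a-value exceeds the b-value counts 1, each tie counts 0.5.
    """
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u


# Hypothetical five-point scores for one parameter (NOT the study's data)
gpt4_scores = [4, 4, 3, 5, 4, 3, 4, 5, 3, 4]
gpt35_scores = [3, 2, 3, 3, 2, 4, 3, 2, 3, 3]

u = mann_whitney_u(gpt4_scores, gpt35_scores)
# The reported test statistic is conventionally min(U, n1*n2 - U);
# its p-value is then read from the U distribution.
u_min = min(u, len(gpt4_scores) * len(gpt35_scores) - u)
print(u, u_min)
```

The nested loop is O(n1·n2), which is perfectly adequate for 20 scenarios per model; rank-based formulations are only needed for large samples.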


Topics

Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging · Machine Learning in Healthcare