This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Efficacy of ChatGPT in personalized glucose-lowering strategy development: a clinician-based comparative study
0
Citations
10
Authors
2026
Year
Abstract
Background: The increasing incidence of diabetes places a significant burden on healthcare systems. Limited research exists on tools to assist providers in developing personalized glucose-lowering strategies, which could alleviate this pressure and enhance patient outcomes.
Objective: This study aims to evaluate the capability of ChatGPT-4o in developing personalized glucose-lowering strategies for individuals with diabetes.
Methods: First, ChatGPT-4o's performance was evaluated on China's qualification examination for attending physicians in endocrinology. Second, a cross-sectional study was conducted comparing glucose-lowering strategies formulated by ChatGPT-4o, general practitioners (GPs), and attending physicians (APs) in endocrinology for a set of 30 real-world diabetes cases. Three clinical experts blindly scored the reasonableness of each strategy on a scale; cases were stratified into three complexity levels (A, B, and C), and mean scores were evaluated for each level.
Results: ChatGPT-4o passed all sections of the qualification examination with scores above the 60% threshold. In developing glucose-lowering strategies, ChatGPT-4o achieved a mean score comparable to GPs (82.24 ± 9.933 vs 79.83 ± 3.768; p = .317) but lower than APs (82.24 ± 9.933 vs 86.35 ± 4.142; p = .0467). Performance declined with increasing case complexity, with mean scores dropping from 89.90 ± 2.936 for simple cases (A-level) to 76.12 ± 11.93 for complex cases (C-level) (p < .0020).
Conclusions: ChatGPT-4o performs reliably in generating glucose-lowering strategies for simpler diabetes cases, highlighting its potential to assist community health workers. However, its accuracy in complex cases, especially concerning medication contraindications, requires improvement.
Related work
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations