This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Feasibility of ChatGPT-4o in management of gynecologic oncologic patients in the emergency department.
Citations: 0
Authors: 11
Year: 2025
Abstract
5614 Background: Recent studies have highlighted the diagnostic and reasoning capabilities of ChatGPT in medicine. This study aims to evaluate the feasibility of using ChatGPT-4o to assist in managing emergency care for gynecologic oncologic patients, focusing on its potential to support physicians and generate patient education materials. Methods: We retrospectively reviewed real cases of gynecologic cancer patients who visited the emergency department of the National Cancer Center in Korea between 2005 and 2024 and identified 15 common cases for evaluation. For each case, four physicians (two gynecologic oncologists and two obstetrics and gynecology residents) assessed the cases based on nine criteria: relevance of differential diagnosis, relevance of suggested necessary examinations, speed in suggesting differential diagnoses and necessary examinations, relevance of examination interpretations, relevance of the final diagnosis, relevance of treatment plans, speed in suggesting the final diagnosis and treatment plans, relevance of prescribed orders, and speed of prescribing orders. Each criterion was scored on a scale of 0, 1, or 2, and total scores were calculated along with the total time taken to generate diagnoses, treatment plans, and actual order prescriptions. The same cases were then evaluated using ChatGPT-4o, with prompts specifically developed to enable consistent assessment. In addition to the nine criteria, ChatGPT-4o was also evaluated on the relevance and speed of patient education, with scores assigned on a scale of 0, 1, or 2. Furthermore, physicians provided feedback on their satisfaction with ChatGPT-4o’s generated answers and patient education materials using the same scale. Results: ChatGPT-4o demonstrated a mean score of 17.1 (range, 14–18) across the 15 cases, outperforming physicians, who achieved a lower mean score of 13.4 (range, 5–17). 
The mean time taken by ChatGPT-4o to respond to all nine criteria was 108.4 (range, 69–142) seconds, significantly faster than physicians, who required an average of 391.4 (range, 126–786) seconds. For relevance of patient education, ChatGPT-4o achieved a mean score of 1.9 (range, 1–2) across the 15 cases, with response times consistently under 1 minute per case. Physicians rated their satisfaction with ChatGPT-4o’s generated diagnoses, treatment plans, and order recommendations at a mean score of 1.9 (range, 1–2). Similarly, their satisfaction with ChatGPT-4o’s patient education materials was rated at a mean score of 1.8 (range, 1–2). Conclusions: ChatGPT-4o demonstrates feasibility as a promising supportive tool for managing emergency care in gynecologic oncologic patients, offering fast and relevant diagnoses, treatment plans, and patient education materials. Future research should focus on developing practical applications and conducting prospective evaluations to optimize its integration into emergency departments.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations