This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
P-1976. Comparing Clinical Expertise and Chat-GPT in the Management of Septic Shock and Severe Pneumonia: A Pilot Study
Citations: 0
Authors: 9
Year: 2026
Abstract
Background
The evolution of artificial intelligence (AI) and large language models (LLMs) offers promising opportunities in infection management. AI-based sepsis identification has been integrated into many electronic medical record systems, and applications for diagnostics and antimicrobial stewardship are emerging. This pilot study assessed ChatGPT-4® as a clinical decision aid for the management of septic shock and severe pneumonia.

[Figure: Flowchart showing methodology of pilot study]
[Figure: Table showing comparison of physician and ChatGPT-4 performance]

Methods
A retrospective study was conducted at Saint Vincent Hospital, Worcester, on 50 cases (2023–2024). Physician-documented investigations and antibiotics were compared with ChatGPT-4® outputs generated using standard prompts with full blinding. Infectious Diseases Society of America guidelines served as the reference standard. Paired t-tests and McNemar's test evaluated investigation appropriateness and antibiotic selection accuracy, respectively, using Statistical Analysis Software® Version 9.4.

[Fig 3: Clinical template used to input H&P data into ChatGPT-4]

Results
Severe pneumonia: physicians recommended more appropriate investigations (mean difference 1.45 out of 4 tests) and antibiotics (0.91 more per case) than ChatGPT-4®, with no significant difference in MRSA or Pseudomonas coverage. Septic shock: physicians outperformed ChatGPT-4® in investigations (mean difference 1 out of 3 tests) and antibiotic choices (1 more per case), with greater accuracy for MRSA coverage; concordance was noted for multi-drug-resistant (MDR) organisms.

Conclusion
Physicians outperformed ChatGPT-4® across both conditions. However, ChatGPT-4® demonstrated comparable pathogen-specific antimicrobial selection and accuracy in MDR coverage, suggesting potential with diagnosis-specific prompting and in antimicrobial stewardship. The results highlight the enduring importance of physician-led decision-making in an era increasingly shaped by AI.
LLMs still face limitations, including data privacy concerns and the need for individualized contextual judgment, that prevent their autonomous use. However, with refinement and responsible implementation, LLMs may evolve into a trusted aid that enhances physician decision-making, especially in areas with limited access to specialist care. This study has led to a prospective trial exploring ChatGPT-4's use with real-time, targeted prompts throughout hospitalization.

Disclosures
All Authors: No reported disclosures
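As a minimal illustration of the comparison described in the Methods, McNemar's test for paired binary outcomes (e.g., whether the physician's and ChatGPT-4's antibiotic choices were each guideline-concordant on the same case) can be sketched with the exact binomial form. The counts below are hypothetical, not from the study, and the abstract reports using SAS 9.4 rather than Python:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar's test p-value for paired binary outcomes.

    b: cases where the physician was correct and ChatGPT-4 was not.
    c: cases where ChatGPT-4 was correct and the physician was not.
    Concordant pairs (both correct or both wrong) do not enter the statistic.
    """
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of a difference
    k = min(b, c)
    # Under H0, each discordant pair favors either rater with probability 0.5,
    # so the p-value is a two-sided exact binomial tail, capped at 1.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical counts: 8 discordant cases favored the physician, 2 favored ChatGPT-4
print(round(mcnemar_exact(2, 8), 4))  # → 0.1094
```

The exact form is appropriate here because a 50-case pilot typically yields few discordant pairs, where the chi-square approximation to McNemar's test is unreliable.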
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations