This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Clinical decision making by ChatGPT vs medical oncologists: A retrospective concordance study.
Citations: 2
Authors: 8
Year: 2024
Abstract
e13634

Background: This research explores the application of AI language models in clinical decision-making within the context of oncology, focusing on the concordance between AI-generated recommendations and actual treatment decisions made by physicians. Leveraging ChatGPT version 4.0, a large language model, we conducted a concordance study using retrospective patient cases from Yeolyan Hematology and Oncology Center. The hypothesis, centered on the alignment of AI chatbots with human decisions, spurred an in-depth exploration of concordance rates.

Methods: A total of 228 adult solid malignancy cases from Yeolyan Hematology and Oncology Center (Jan 2020 - Sep 2021) were analyzed, focusing on primary cancers and data from multidisciplinary tumor board meetings. We scored concordance and analyzed the response times as well.

Results: The study unveiled an overall 62.3% concordance rate, with varying rates among different cancer types: lung (50%), breast (50%), gastrointestinal (66%), and gynecological (81.8%) cancers. Despite promising results, challenges emerged, including variability in ChatGPT-4 responses and concerns about reliability and reproducibility. The study acknowledges the influence of prior assessments from other healthcare facilities, financial disparities impacting treatment decisions, and potential oversimplification of chemotherapy regimens.

Conclusions: AI-based language models like ChatGPT present intriguing possibilities for healthcare, yet further research, controlled trials, and prospective studies are imperative to comprehensively understand and enhance their applicability in personalized patient care. This research sheds light on the complexities and considerations associated with integrating AI into oncological decision-making, offering valuable insights for future advancements in this evolving field.
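The concordance metric reported in the Results is, in essence, the proportion of cases in which the AI recommendation matched the tumor-board decision, computed overall and per cancer type. A minimal sketch of that calculation is shown below; the function name and the toy case data are hypothetical illustrations, not the study's actual scoring code or data.

```python
# Hedged sketch: concordance as the fraction of cases where the AI
# recommendation matched the tumor-board decision, overall and by cancer type.
# `concordance_rates` and the toy data below are hypothetical.
from collections import defaultdict

def concordance_rates(cases):
    """cases: iterable of (cancer_type, matched) pairs,
    where `matched` is True if the AI agreed with the board."""
    totals = defaultdict(int)
    matches = defaultdict(int)
    for cancer_type, matched in cases:
        totals[cancer_type] += 1
        if matched:
            matches[cancer_type] += 1
    per_type = {t: matches[t] / totals[t] for t in totals}
    overall = sum(matches.values()) / sum(totals.values())
    return overall, per_type

# Toy example: 4 lung cases (2 concordant), 2 gynecological (both concordant).
cases = [("lung", True), ("lung", False), ("lung", True), ("lung", False),
         ("gynecological", True), ("gynecological", True)]
overall, per_type = concordance_rates(cases)
# overall → 4/6 ≈ 0.667; lung → 0.5; gynecological → 1.0
```

The per-type breakdown mirrors how the study reports rates for lung, breast, gastrointestinal, and gynecological cancers alongside the overall 62.3% figure.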
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations