This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Comparative Analysis of GPT-4Vision, GPT-4 and Open Source LLMs in Clinical Diagnostic Accuracy: A Benchmark Against Human Expertise
Citations: 14
Authors: 7
Year: 2023
Abstract
Importance: Medicine is poised for transformation, with artificial general intelligence becoming integral to almost all clinical environments. Currently, the performance of multimodal AI, specifically one powered by GPT-4, in real clinical cases remains uncharted.

Objective: To ascertain whether GPT-4V can consistently comprehend complex diagnostic scenarios through both imagery and textual data.

Design: A selection of 140 clinical cases from the JAMA Clinical Challenge and 348 from the NEJM Image Challenge were used. Each case, comprising a clinical image and corresponding question, was processed by GPT-4V, and responses were documented. The significance of imaging information was assessed by comparing GPT-4V's performance with that of four other leading-edge large language models (LLMs).

Main Outcomes and Measures: The accuracy of responses was gauged by comparing the model's answers with the established ground truths of the challenges. The confidence interval for the model's performance was calculated using bootstrapping methods. Additionally, human performance on the NEJM Image Challenge was recorded, reflected by the percentage of challenge participants selecting each choice.

Results: GPT-4V demonstrated superior accuracy on both sources, achieving 73.3% for JAMA and 88.7% for NEJM, notably outperforming text-only LLMs such as GPT-4, GPT-3.5, Llama2, and Med-42. Remarkably, both GPT-4V and GPT-4 exceeded average human participants' performance at all complexity levels within the NEJM Image Challenge.

Conclusions and Relevance: GPT-4V has exhibited considerable promise in clinical diagnostic tasks, surpassing the capabilities of its predecessors as well as those of human experts. However, while its proficiency in identification tasks is commendable, it requires further refinement in decision-making and strategic planning. Despite these encouraging results, such models should be adopted with prudence in clinical settings, serving to augment rather than replace human discretion. Continual research is imperative to fully evaluate the potential impact on patient care.