This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Evaluating Large Language Model (LLM) Performance on Established Breast Classification Systems
Citations: 25
Authors: 7
Year: 2024
Abstract
Medical researchers are increasingly utilizing advanced LLMs like ChatGPT-4 and Gemini to enhance diagnostic processes in the medical field. This research focuses on their ability to comprehend and apply complex medical classification systems for breast conditions, which can significantly aid plastic surgeons in making informed decisions for diagnosis and treatment, ultimately leading to improved patient outcomes. Fifty clinical scenarios were created to evaluate the classification accuracy of each LLM across five established breast-related classification systems. Scores from 0 to 2 were assigned to LLM responses to denote incorrect, partially correct, or completely correct classifications. Descriptive statistics were employed to compare the performances of ChatGPT-4 and Gemini. Gemini exhibited superior overall performance, achieving 98% accuracy compared to ChatGPT-4's 71%. While both models performed well in the Baker classification for capsular contracture and UTSW classification for gynecomastia, Gemini consistently outperformed ChatGPT-4 in other systems, such as the Fischer Grade Classification for gender-affirming mastectomy, Kajava Classification for ectopic breast tissue, and Regnault Classification for breast ptosis. With further development, integrating LLMs into plastic surgery practice will likely enhance diagnostic support and decision making.
Similar works
A survey on deep learning in medical image analysis
2017 · 13,500 citations
Dermatologist-level classification of skin cancer with deep neural networks
2017 · 13,129 citations
A survey on Image Data Augmentation for Deep Learning
2019 · 11,731 citations
QuPath: Open source software for digital pathology image analysis
2017 · 8,101 citations
Radiomics: Images Are More than Pictures, They Are Data
2015 · 7,981 citations