This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
EYE-Llama, an in-domain large language model for ophthalmology
7
Citations
9
Authors
2024
Year
Abstract
Background: Training large language models (LLMs) on in-domain data can significantly enhance their performance, leading to more accurate and reliable question-answering (QA) systems, which are essential for supporting clinical decision-making and educating patients.
Methods: This study introduces ophthalmic LLMs trained on well-curated, in-domain datasets. We present a substantial open-source ophthalmic language dataset for model training. Our models (EYE-Llama) were pre-trained on an ophthalmology-specific corpus comprising paper abstracts, textbooks, and Wikipedia articles, and subsequently fine-tuned on a diverse range of QA pairs. The models were compared to the baseline Llama 2, ChatDoctor, Meditron, Llama 3, and ChatGPT (GPT-3.5) models on four distinct test sets, and evaluated both quantitatively (accuracy, F1 score, BERTScore, BARTScore, and BLEU score) and qualitatively by two ophthalmologists.
Findings: On the synthetic dialogue test set, evaluated with three metrics (BERTScore, BARTScore, and BLEU score), our models demonstrated superior performance. Specifically, under BERTScore F1 our models surpassed Llama 2, Llama 3, Meditron, and ChatDoctor, and performed on par with ChatGPT, a model with 175 billion parameters (EYE-Llama: 0.57, Llama 2: 0.56, Llama 3: 0.55, Meditron: 0.50, ChatDoctor: 0.56, ChatGPT: 0.57). EYE-Llama also outperformed these models under BARTScore and BLEU. On the MedMCQA test set, the fine-tuned models achieved higher accuracy than the Llama 2, Meditron, and ChatDoctor models (EYE-Llama: 0.39, Llama 2: 0.33, ChatDoctor: 0.29, Meditron: 0.22). However, the ChatGPT and Llama 3 models outperformed EYE-Llama, achieving accuracies of 0.55, 0.78, and 0.90, respectively. On the PubMedQA test set, our model achieved higher accuracy than all other models (EYE-Llama: 0.96, Llama 2: 0.90, Llama 3: 0.92, Meditron: 0.76, ChatGPT: 0.93, ChatDoctor: 0.92).
Interpretation: The study shows that pre-training and fine-tuning LLMs such as EYE-Llama enhances their performance in specific medical domains. Our EYE-Llama models surpass the baseline Llama 2 in all evaluations, highlighting the effectiveness of specialized LLMs in medical QA systems.
Funding: Funded by NEI R15EY035804 (MNA), R21EY035271 (MNA), and a UNC Charlotte Faculty Research Grant (MNA).
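To make the quantitative evaluation the abstract describes concrete, the sketch below scores model answers against reference answers with BERTScore and corpus-level BLEU. It is a minimal sketch, not the authors' released code: the example sentences are invented placeholders, and it assumes the bert-score and sacrebleu Python packages are installed. BARTScore, which the paper also uses, comes from a separate research repository and is omitted here.

```python
# Minimal sketch of reference-based QA evaluation (not the paper's code).
# Assumes: pip install bert-score sacrebleu
from bert_score import score as bert_score
import sacrebleu

# Hypothetical paired data: model outputs and reference answers,
# standing in for a synthetic-dialogue test set.
candidates = [
    "Glaucoma is managed by lowering intraocular pressure.",
    "Cataract surgery replaces the clouded lens with an implant.",
]
references = [
    "Treatment of glaucoma focuses on reducing intraocular pressure.",
    "In cataract surgery the opaque lens is replaced by an intraocular lens.",
]

# BERTScore compares candidate and reference tokens via contextual
# embeddings; it returns per-pair precision, recall, and F1 tensors.
P, R, F1 = bert_score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1 (mean): {F1.mean().item():.2f}")

# Corpus-level BLEU via sacrebleu; note the list-of-lists shape,
# one inner list per reference set.
bleu = sacrebleu.corpus_bleu(candidates, [references])
print(f"BLEU: {bleu.score:.2f}")
```

Averaging the per-pair F1 values over a whole test set yields a single comparable number per model, which is how scores like "EYE-Llama: 0.57 vs. Llama 2: 0.56" are typically tabulated.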
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,611 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,504 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,025 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations