This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Evaluating fine-tuned GPT models on different datasets in the healthcare domain
Citations: 1
Authors: 3
Year: 2025
Abstract
This study investigates the performance of fine-tuned generative pre-trained transformers (GPT) on different healthcare datasets to enhance public health literacy. The background of this study is rooted in the recognition of the critical role health literacy plays in fostering public awareness and understanding of medical information. Against this backdrop, the objective is to explore domain-specific GPT models that enhance accessibility to comprehensive health information. This study fine-tunes the GPT model on several types of datasets, namely PubMed, Medical Information Mart for Intensive Care III (MIMIC-III), MedQA, MedMCQA, and consultation datasets. The models are evaluated using the Massive Multitask Language Understanding (MMLU) and Massive Multi-discipline Multimodal Understanding (MMMU) benchmarks. Results showed that fine-tuning language models on domain-specific datasets, especially in healthcare, substantially improves their performance. In conclusion, this research highlights the potential of domain-specific GPT models in the modern healthcare landscape.
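The benchmark evaluation described in the abstract (MMLU and MMMU are multiple-choice test suites) ultimately reduces to exact-match accuracy over predicted answer letters. A minimal sketch of that scoring step, where `predict` is a generic callable standing in for the fine-tuned GPT model; all names and sample items here are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MCQItem:
    """One multiple-choice benchmark item (MMLU/MMMU style)."""
    question: str
    choices: List[str]  # e.g. four options labelled A-D
    answer: str         # gold answer letter, e.g. "B"

def evaluate(items: List[MCQItem], predict: Callable[[MCQItem], str]) -> float:
    """Exact-match accuracy: fraction of items where the predicted
    answer letter equals the gold answer letter."""
    correct = sum(1 for item in items if predict(item) == item.answer)
    return correct / len(items)

# Illustrative stand-in for a model: always answers "A".
def always_a(item: MCQItem) -> str:
    return "A"

items = [
    MCQItem("Which vitamin deficiency causes scurvy?",
            ["Vitamin C", "Vitamin D", "Vitamin K", "Vitamin B12"], "A"),
    MCQItem("Which organ produces insulin?",
            ["Liver", "Pancreas", "Kidney", "Spleen"], "B"),
]
print(evaluate(items, always_a))  # 0.5
```

In practice the `predict` step would prompt the fine-tuned model with the question and lettered choices and parse the chosen letter from its output; the scoring loop itself is unchanged.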
Similar works
"Why Should I Trust You?"
2016 · 14,227 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,601 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,387 citations