OpenAlex · Updated hourly · Last updated: 21.03.2026, 18:57

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Evaluating fine-tuned GPT models on different datasets in the healthcare domain

2025 · 1 citation · Innovation and Emerging Technologies · Open Access
Open full text at publisher

Citations: 1 · Authors: 3 · Year: 2025

Abstract

This study investigates the performance of fine-tuned generative pre-trained transformers (GPT) on different healthcare datasets to enhance public health literacy. The background of this study is rooted in the recognition of the critical role health literacy plays in fostering public awareness and understanding of medical information. Against this backdrop, the objective is to explore domain-specific GPT models that enhance accessibility to comprehensive health information. This study fine-tunes the GPT model on several datasets: PubMed, the Medical Information Mart for Intensive Care III (MIMIC-III), MedQA, MedMCQA, and consultation datasets. The models are evaluated using the Massive Multitask Language Understanding (MMLU) and Massive Multi-discipline Multimodal Understanding (MMMU) benchmarks. Results showed that fine-tuning language models on domain-specific datasets, especially in healthcare, substantially improves their performance. In conclusion, this research highlights the potential of domain-specific GPT models in the modern healthcare landscape.
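The paper does not publish its training pipeline, but a common first step when fine-tuning a GPT-style model on QA datasets such as MedQA or MedMCQA is converting question-answer pairs into chat-format JSONL training records. The sketch below is illustrative only: the example questions, the system prompt, and the helper name `to_finetune_records` are assumptions, not details from the paper.

```python
import json

# Hypothetical MedQA-style QA pairs (illustrative, not from the paper's datasets).
qa_pairs = [
    {"question": "Which vitamin deficiency causes scurvy?",
     "answer": "Vitamin C"},
    {"question": "What is the normal resting adult heart rate range?",
     "answer": "60-100 beats per minute"},
]

def to_finetune_records(pairs, system_prompt="You are a medical assistant."):
    """Convert QA pairs into chat-format records, one JSON object per line,
    as commonly used for supervised fine-tuning of GPT-style models."""
    lines = []
    for p in pairs:
        record = {
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": p["question"]},
                {"role": "assistant", "content": p["answer"]},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

jsonl = to_finetune_records(qa_pairs)
print(jsonl.count("\n") + 1)  # number of training records -> 2
```

Each line of the resulting JSONL is an independent training example, which is the format expected by typical chat-model fine-tuning endpoints; evaluation against MMLU/MMMU would then be run separately on the fine-tuned model.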

Topics

Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education · Artificial Intelligence in Healthcare