OpenAlex · Updated hourly · Last updated: 28.04.2026, 09:21

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Pretraining effective T5 generative models for clinical and biomedical applications

2026 · 0 citations · PLoS ONE · Open Access
Open full text at the publisher

Citations: 0
Authors: 5
Year: 2026

Abstract

This paper presents a study of the impact of corpus selection and vocabulary design on the performance of T5-based language models in clinical and biomedical domains. We introduce five T5-EHR models, each pretrained from scratch on a different combination of clinical and biomedical corpora with a domain-specific vocabulary. We evaluated these models across a variety of clinical and biomedical tasks to quantify how pretraining data and vocabulary tokenization choices affect downstream performance. Our findings reveal the importance of aligning both the pretraining corpus and the vocabulary with the target domain. Models pretrained exclusively on clinical data achieve superior performance on clinical tasks, while adding biomedical data contributes only marginal gains in most cases, with a few exceptions. Similarly, the choice of vocabulary significantly influences model performance, with clinical-specific vocabularies outperforming general biomedical vocabularies on tasks requiring a deeper understanding of clinical language. Moreover, the T5 generative models perform competitively with state-of-the-art discriminative models on several biomedical benchmarks, demonstrating strong generalization to the biomedical domain. Overall, these results emphasize that task-specific selection of corpus and vocabulary is essential for optimizing model performance in clinical and biomedical natural language processing (NLP).
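To illustrate why vocabulary choice matters for tokenization, the sketch below implements a minimal greedy longest-match subword tokenizer (WordPiece-style). The two vocabularies and the clinical term are hypothetical examples, not taken from the paper: a general vocabulary fragments a clinical term into many pieces, while a clinical-specific vocabulary keeps it intact, which is one plausible mechanism behind the reported performance differences.

```python
# Minimal greedy longest-match subword tokenizer (WordPiece-style).
# Vocabularies and the example word are illustrative assumptions,
# not the vocabularies used in the paper.

def tokenize(word, vocab):
    """Segment a word greedily into the longest matching subword pieces.

    Word-internal pieces carry a '##' prefix, as in WordPiece.
    Returns ['[UNK]'] if no segmentation exists.
    """
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        piece = None
        while end > start:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub  # mark word-internal continuation
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:  # no vocabulary entry covers this position
            return ["[UNK]"]
        pieces.append(piece)
        start = end
    return pieces

# Hypothetical vocabularies for demonstration.
general_vocab = {"hypo", "##nat", "##re", "##mia"}
clinical_vocab = {"hyponatremia"}

print(tokenize("hyponatremia", general_vocab))   # ['hypo', '##nat', '##re', '##mia']
print(tokenize("hyponatremia", clinical_vocab))  # ['hyponatremia']
```

A domain term split into four generic pieces forces the model to reassemble its meaning from fragments, whereas a single clinical token gives the embedding layer a direct handle on the concept.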
