This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Pretraining effective T5 generative models for clinical and biomedical applications
Citations: 0
Authors: 5
Year: 2026
Abstract
This paper presents a study of the impact of corpus selection and vocabulary design on the performance of T5-based language models in clinical and biomedical domains. We introduce five different T5-EHR models, each pretrained from scratch using different combinations of clinical and biomedical corpora alongside domain-specific vocabularies. We evaluated these models across a variety of clinical and biomedical tasks to quantify the impact of pretraining data and vocabulary tokenization choices on downstream performance. Our findings reveal the importance of aligning both the pretraining corpus and the vocabulary with the target domain. Models pretrained exclusively on clinical data achieve superior performance on clinical tasks, while adding biomedical data contributes only marginal gains in most cases, with a few exceptions. Similarly, the choice of vocabulary significantly influences model performance, with clinical-specific vocabularies outperforming general biomedical vocabularies on tasks requiring a deeper understanding of clinical language. Moreover, the T5 generative models perform competitively with state-of-the-art discriminative models on several biomedical benchmarks, demonstrating strong generalization to the biomedical domain. Overall, these results emphasize that task-specific selection of corpus and vocabulary is essential for optimizing model performance in clinical and biomedical natural language processing (NLP).
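The abstract's point about vocabulary alignment can be illustrated with a toy sketch: a subword vocabulary built on general text fragments a clinical term into many pieces, while a clinical-specific vocabulary keeps it intact. The vocabularies and the greedy longest-match tokenizer below are hypothetical simplifications for illustration, not the paper's actual tokenizers or vocabularies.

```python
def greedy_tokenize(word, vocab):
    """Greedy longest-match subword tokenization (WordPiece-style, simplified)."""
    pieces, i = [], 0
    while i < len(word):
        # Try the longest substring starting at i that is in the vocabulary.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # unknown character falls back to itself
            i += 1
    return pieces

# Hypothetical vocabularies for illustration only (not the paper's).
general_vocab = {"hypo", "nat", "re", "mia", "pat", "ient"}
clinical_vocab = general_vocab | {"hyponatremia", "patient"}

print(greedy_tokenize("hyponatremia", general_vocab))   # ['hypo', 'nat', 're', 'mia']
print(greedy_tokenize("hyponatremia", clinical_vocab))  # ['hyponatremia']
```

Fewer subword pieces per in-domain term generally means shorter input sequences and less fragmentation of domain concepts, which is one mechanism by which a domain-aligned vocabulary can improve downstream performance.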
Similar works
"Why Should I Trust You?" · 2016 · 14,594 citations
A Comprehensive Survey on Graph Neural Networks · 2020 · 8,861 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead · 2019 · 8,426 citations
High-performance medicine: the convergence of human and artificial intelligence · 2018 · 7,921 citations
Artificial intelligence in healthcare: past, present and future · 2017 · 4,496 citations