This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Federated pretraining and fine tuning of BERT using clinical notes from multiple silos
2020 · 24 Citations · 2 Authors · arXiv (Cornell University) · Open Access
Abstract
Large scale contextual representation models, such as BERT, have significantly advanced natural language processing (NLP) in recent years. However, in certain areas like healthcare, accessing diverse large-scale text data from multiple institutions is extremely challenging due to privacy and regulatory reasons. In this article, we show that it is possible to both pretrain and fine-tune BERT models in a federated manner using clinical texts from different silos without moving the data.
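The abstract describes the approach only at a high level. Below is a minimal sketch of what weight-level federation of BERT pretraining could look like, using plain federated averaging (FedAvg) over the model parameters: each silo trains a copy of the shared model locally and only parameter updates, never clinical text, are sent to the aggregator. The synthetic silo data, loader construction, hyperparameters, and aggregation schedule here are illustrative assumptions, not the authors' actual configuration.

```python
import torch
from transformers import BertForMaskedLM

# Illustrative stand-in for one silo's private clinical notes: random token
# ids, with labels equal to the inputs so the MLM head has a target everywhere.
def make_silo_loader(num_examples=4, seq_len=16):
    input_ids = torch.randint(1000, 2000, (num_examples, seq_len))
    return [{"input_ids": input_ids,
             "attention_mask": torch.ones_like(input_ids),
             "labels": input_ids.clone()}]

def local_update(global_state, dataloader, lr=5e-5, local_epochs=2):
    """Train a fresh copy of the shared BERT model on one silo's data."""
    model = BertForMaskedLM.from_pretrained("bert-base-uncased")
    model.load_state_dict(global_state)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(local_epochs):
        for batch in dataloader:
            loss = model(**batch).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model.state_dict()

def federated_average(local_states, silo_sizes):
    """FedAvg: size-weighted parameter average; raw text never leaves a silo."""
    total = sum(silo_sizes)
    avg = {}
    for key in local_states[0]:
        summed = sum(n * s[key].float() for n, s in zip(silo_sizes, local_states))
        avg[key] = (summed / total).to(local_states[0][key].dtype)
    return avg

silo_loaders = [make_silo_loader() for _ in range(3)]  # three hypothetical silos
silo_sizes = [4, 4, 4]                                 # examples held per silo
global_state = BertForMaskedLM.from_pretrained("bert-base-uncased").state_dict()
for communication_round in range(2):                   # each round ships weights only
    local_states = [local_update(global_state, dl) for dl in silo_loaders]
    global_state = federated_average(local_states, silo_sizes)
```

The same round structure applies to federated fine-tuning: swap the masked-language-model objective for a task head and keep the aggregation step unchanged.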
Topics
Topic Modeling · Machine Learning in Healthcare · Radiomics and Machine Learning in Medical Imaging