This is an overview page with metadata for this scientific article. The full article is available from the publisher.
An in-depth evaluation of federated learning on biomedical natural language processing for information extraction
Citations: 34
Authors: 7
Year: 2024
Abstract
Language models (LMs) such as BERT and GPT have revolutionized natural language processing (NLP). However, the medical field faces challenges in training LMs due to limited data access and privacy constraints imposed by regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR). Federated learning (FL) offers a decentralized solution that enables collaborative learning while preserving data privacy. In this study, we evaluated FL on 2 biomedical NLP tasks encompassing 8 corpora using 6 LMs. Our results show that: (1) FL models consistently outperformed models trained on individual clients' data and sometimes performed comparably with models trained on pooled data; (2) with a fixed amount of total data, FL models trained across more clients performed worse, although pre-trained transformer-based models exhibited great resilience; (3) FL models significantly outperformed pre-trained LLMs with few-shot prompting.
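The aggregation scheme commonly used in FL setups like the one described above is federated averaging (FedAvg): each client trains locally on its private data, and a server averages the resulting model weights, weighted by client data size. The sketch below is a minimal, hypothetical illustration on a toy one-parameter regression, not the paper's actual training pipeline.

```python
# Minimal FedAvg sketch (illustrative only, not the paper's setup).
# Each "client" fits y = w * x by local gradient descent on its own
# private data; the server averages the locally trained weights,
# weighted by each client's number of examples.

def local_update(w, data, lr=0.01, epochs=20):
    """Run local gradient descent on one client's (x, y) pairs."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(w_global, client_datasets, rounds=10):
    """One FL job: broadcast global weight, train locally, average."""
    for _ in range(rounds):
        sizes = [len(d) for d in client_datasets]
        local_ws = [local_update(w_global, d) for d in client_datasets]
        w_global = sum(w * n for w, n in zip(local_ws, sizes)) / sum(sizes)
    return w_global

# Two clients whose private data follow the same rule y = 3x;
# neither client's raw data ever leaves its own local_update call.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0), (4.0, 12.0), (5.0, 15.0)],
]
w = fed_avg(0.0, clients)  # converges toward w = 3.0
```

Only model weights cross the client-server boundary here, which is the privacy property the abstract refers to; in practice each client would train a full LM (e.g. BERT) rather than a single scalar weight.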
Related Work
k-Anonymity: A Model for Protecting Privacy
2002 · 8,390 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,866 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,590 citations
Deep Learning with Differential Privacy
2016 · 5,572 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,558 citations