OpenAlex · Updated hourly · Last updated: 30.03.2026, 08:27

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Federated Instruction Tuning with DeepSeek: Towards Scalable and Private LLM Adaptation

Published: 2025 · Citations: 0 · Authors: 3

Abstract

Recent advances in large language models (LLMs) have enabled a wide range of AI-driven applications, yet their deployment in sensitive domains such as healthcare remains constrained by stringent data privacy requirements. To address this challenge, we adopt a federated learning framework for cross-institutional collaborative training of the DeepSeek-R1-Distill-Qwen-1.5B model, thereby avoiding centralized data aggregation. In this setup, each institution trains the model locally on its data, and updates are aggregated using Federated Averaging (FedAvg) to construct a global model while ensuring patient information never leaves its source. Experiments conducted on the medical-o1-reasoning-SFT dataset demonstrate that our federated approach achieves 56.9% accuracy, closely matching the performance of centralized training while preserving data confidentiality. Furthermore, the model maintains stable performance under a non-IID manual split with per-client subsampling, underscoring the robustness of our method.
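The aggregation scheme named in the abstract is standard Federated Averaging (FedAvg): each institution fine-tunes a local copy of the model, and the server forms a weighted mean of the resulting parameters. The sketch below is a minimal illustration assuming a PyTorch workflow; the function name, the toy client models, and the weighting by local dataset size are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn

def fedavg(client_states, client_sizes):
    """Weighted average of client state_dicts (FedAvg): every parameter is
    averaged across clients, weighted by each client's local dataset size."""
    total = float(sum(client_sizes))
    global_state = {}
    for name in client_states[0]:
        global_state[name] = sum(
            state[name].float() * (size / total)
            for state, size in zip(client_states, client_sizes)
        )
    return global_state

# Toy round with two "institutions" holding different amounts of local data.
client_a, client_b = nn.Linear(4, 2), nn.Linear(4, 2)
global_model = nn.Linear(4, 2)
global_model.load_state_dict(
    fedavg([client_a.state_dict(), client_b.state_dict()], client_sizes=[800, 200])
)
```

In a real cross-institutional run the same averaging would be applied to the fine-tuned weights (or adapter weights) of DeepSeek-R1-Distill-Qwen-1.5B after each local training round; only these parameter updates, never the patient data, would be sent to the server.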

Topics

Privacy-Preserving Technologies in Data · Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education