This is an overview page with metadata for this scientific article. The full article is available from the publisher.
F-CodeLLM: A Federated Learning Framework for Adapting Large Language Models to Practical Software Development
Citations: 8
Authors: 6
Year: 2024
Abstract
Large Language Models (LLMs) have revolutionized code intelligence tasks, but their performance in specific software development tasks often requires fine-tuning with task-specific data. However, acquiring such data is challenging due to privacy concerns. We introduce F-CodeLLM, a novel federated learning framework for adapting LLMs to software development tasks while preserving code data privacy. Leveraging federated learning and LoRA-based efficient fine-tuning, F-CodeLLM allows organizations to collaboratively improve LLMs without sharing sensitive data. Our experiments demonstrate that F-CodeLLM achieves comparable results to centralized fine-tuning methods and excels in multi-language environments, marking a significant advancement in the application of LLMs for software engineering.
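The abstract describes combining federated learning with LoRA-based fine-tuning so organizations can improve a shared model without exchanging raw code data. F-CodeLLM's exact aggregation protocol is not given here; the following is a minimal sketch, assuming standard FedAvg applied to LoRA adapter matrices, with toy NumPy arrays standing in for real adapter weights (names like `lora_A`/`lora_B` and the client setup are illustrative assumptions, not the paper's API):

```python
import numpy as np

def fed_avg(client_updates, client_sizes):
    """Weighted FedAvg: average each named LoRA adapter tensor
    across clients, weighted by local dataset size."""
    total = sum(client_sizes)
    return {
        name: sum(
            (n / total) * upd[name]
            for upd, n in zip(client_updates, client_sizes)
        )
        for name in client_updates[0]
    }

# Toy example: two clients, each holding LoRA factors A (r x d) and B (d x r).
rng = np.random.default_rng(0)
r, d = 4, 16
clients = [
    {"lora_A": rng.normal(size=(r, d)), "lora_B": np.zeros((d, r))}
    for _ in range(2)
]
sizes = [100, 300]  # local dataset sizes drive the averaging weights

global_adapter = fed_avg(clients, sizes)
print(global_adapter["lora_A"].shape)  # (4, 16)
```

Only the small adapter tensors leave each client, which is what makes LoRA attractive in a federated setting: communication cost scales with the adapter rank rather than the full model size.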
Related Works
k-Anonymity: A Model for Protecting Privacy
2002 · 8,400 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,884 citations
Deep Learning with Differential Privacy
2016 · 5,608 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,592 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,570 citations