OpenAlex · Updated hourly · Last update: 22.03.2026, 20:38

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

F-CodeLLM: A Federated Learning Framework for Adapting Large Language Models to Practical Software Development

2024 · 8 citations · Open Access
Open full text at the publisher

Citations: 8 · Authors: 6 · Year: 2024

Abstract

Large Language Models (LLMs) have revolutionized code intelligence tasks, but their performance in specific software development tasks often requires fine-tuning with task-specific data. However, acquiring such data is challenging due to privacy concerns. We introduce F-CodeLLM, a novel federated learning framework for adapting LLMs to software development tasks while preserving code data privacy. Leveraging federated learning and LoRA-based efficient fine-tuning, F-CodeLLM allows organizations to collaboratively improve LLMs without sharing sensitive data. Our experiments demonstrate that F-CodeLLM achieves comparable results to centralized fine-tuning methods and excels in multi-language environments, marking a significant advancement in the application of LLMs for software engineering.
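The abstract combines two techniques: LoRA, which fine-tunes only low-rank adapter matrices instead of full model weights, and federated averaging, where a server aggregates clients' local updates without seeing their data. A minimal sketch of how these compose, using assumed function names and NumPy stand-ins for the adapter matrices (this is an illustration of the general idea, not the paper's implementation):

```python
# Sketch: each client fine-tunes only low-rank LoRA matrices (A, B) on its
# private code data; a server combines them by FedAvg-style weighted
# averaging. Only the small adapters are transmitted, never raw code.
import numpy as np

def lora_delta(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """LoRA weight update: delta_W = B @ A (rank-r factorization)."""
    return B @ A

def fedavg_adapters(client_adapters, client_sizes):
    """Average each client's (A, B) pair, weighted by local dataset size."""
    total = sum(client_sizes)
    A_avg = sum((n / total) * A for (A, _), n in zip(client_adapters, client_sizes))
    B_avg = sum((n / total) * B for (_, B), n in zip(client_adapters, client_sizes))
    return A_avg, B_avg

# Example: two clients, hidden size 4, LoRA rank 2.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(2, 4)), rng.normal(size=(4, 2))) for _ in range(2)]
A_g, B_g = fedavg_adapters(clients, client_sizes=[100, 300])
print(lora_delta(A_g, B_g).shape)  # (4, 4): a full-rank-shaped update from rank-2 factors
```

Because only the rank-r factors (here 2 × 4 and 4 × 2) are exchanged, the communication cost per round is far below sending full weight matrices, which is what makes this practical for LLM-scale models.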

Topics

Privacy-Preserving Technologies in Data · Software Engineering Research · Artificial Intelligence in Healthcare and Education