OpenAlex · Updated hourly · Last updated: 18.03.2026, 22:10

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

MPCTF: A Multi-Party Collaborative Training Framework for Large Language Models

2025 · 0 citations · Electronics · Open Access
Open full text at publisher

0 Citations

2 Authors

2025 Year

Abstract

The demand for high-quality private data in large language models is growing significantly. However, private data is often scattered across different entities, leading to significant data silo issues. To alleviate such problems, we propose a novel multi-party collaborative training framework for large language models, named MPCTF. MPCTF consists of several components that enable multi-party collaborative training: (1) a one-click launch mechanism with multi-node and multi-GPU training capabilities, significantly simplifying user operations while enhancing automation and optimizing the collaborative training workflow; (2) four data partitioning strategies for splitting client datasets during training, namely a fixed-size strategy, a percentage-based strategy, a maximum data volume strategy, and a strategy based on total data volume and available GPU memory; (3) multiple aggregation strategies; and (4) multiple privacy-protection strategies. We conducted extensive experiments to validate the effectiveness of the proposed MPCTF. The experimental results demonstrate that MPCTF achieves superior performance; for example, it reached an accuracy of 65.43, outperforming the existing work, which reached an accuracy of 14.25 in our experiments. We hope that MPCTF can promote the development of collaborative training for large language models.
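The four data partitioning strategies listed in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function name `partition_sizes`, its parameters, and the chunking logic are not taken from the paper, which does not expose its actual API here.

```python
# Hypothetical sketch of the four client-data partitioning strategies
# named in the abstract. Names, signatures, and the memory heuristic
# are assumptions, not MPCTF's actual implementation.

def partition_sizes(total_items, strategy, *, fixed_size=None,
                    percentages=None, max_items=None,
                    gpu_memory_bytes=None, bytes_per_item=None):
    """Return a list of partition sizes for one client's dataset."""
    if strategy == "fixed":
        # Fixed-size strategy: split into chunks of a constant size.
        full, rest = divmod(total_items, fixed_size)
        return [fixed_size] * full + ([rest] if rest else [])
    if strategy == "percentage":
        # Percentage-based strategy: split by the given fractions.
        sizes = [int(total_items * p) for p in percentages]
        sizes[-1] += total_items - sum(sizes)  # absorb rounding remainder
        return sizes
    if strategy == "max_volume":
        # Maximum data volume strategy: cap each partition at max_items.
        return partition_sizes(total_items, "fixed", fixed_size=max_items)
    if strategy == "memory_aware":
        # Total data volume + available GPU memory strategy: size each
        # chunk so it fits an assumed per-item memory budget.
        per_chunk = max(1, gpu_memory_bytes // bytes_per_item)
        return partition_sizes(total_items, "fixed", fixed_size=per_chunk)
    raise ValueError(f"unknown strategy: {strategy}")
```

For example, `partition_sizes(10, "fixed", fixed_size=4)` yields `[4, 4, 2]`, and the memory-aware variant reduces to a fixed-size split whose chunk size is derived from the memory budget.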

Similar works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Topic Modeling · Radiomics and Machine Learning in Medical Imaging