OpenAlex · Updated hourly · Last updated: 26 Apr 2026, 02:20

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Llama 3 Meets MoE: Efficient Upcycling

2024 · 0 citations · arXiv (Cornell University) · Open Access
Open full text at publisher

0

Citations

7

Authors

2024

Year

Abstract

Scaling large language models (LLMs) significantly improves performance but comes with prohibitive computational costs. Mixture-of-Experts (MoE) models offer an efficient alternative, increasing capacity without a proportional rise in compute requirements. However, training MoE models from scratch poses challenges like overfitting and routing instability. We present an efficient training recipe leveraging pre-trained dense checkpoints, training an 8-Expert Top-2 MoE model from Llama 3-8B with less than 1% of typical pre-training compute. Our approach enhances downstream performance on academic benchmarks, achieving a 2% improvement in 0-shot accuracy on MMLU, while reaching a Model FLOPs Utilization (MFU) of 46.8% during training using our framework. We also integrate online upcycling in NeMo for seamless use of pre-trained weights, enabling cost-effective development of high-capacity MoE models.
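The recipe itself is only summarized here, but the core idea of upcycling a dense checkpoint into a Top-2 MoE can be sketched as follows. This is a minimal NumPy toy model, not the paper's NeMo implementation; all function names, shapes, and the router initialization scale are illustrative assumptions. Each expert starts as an exact copy of the dense FFN weights, and only the router is newly initialized, so at initialization the MoE reproduces the dense layer's output (the top-2 gate weights are softmax-renormalized and the experts are identical).

```python
import numpy as np

def dense_ffn(x, w_in, w_out):
    # Standard dense feed-forward block: up-project, ReLU, down-project.
    return np.maximum(x @ w_in, 0.0) @ w_out

def upcycle(w_in, w_out, n_experts=8, seed=0):
    # Upcycling: every expert is initialized as a copy of the dense FFN;
    # only the router weights are new (small random init, scale assumed).
    rng = np.random.default_rng(seed)
    experts = [(w_in.copy(), w_out.copy()) for _ in range(n_experts)]
    router = rng.normal(scale=0.02, size=(w_in.shape[0], n_experts))
    return experts, router

def moe_forward(x, experts, router, top_k=2):
    # Top-k routing: per token, pick the k highest-scoring experts and
    # mix their outputs with gate weights renormalized over the top-k.
    logits = x @ router                            # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # indices of k largest
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        gates = np.exp(sel - sel.max())
        gates /= gates.sum()                       # gates sum to 1
        for g, e in zip(gates, top[t]):
            e_in, e_out = experts[e]
            out[t] += g * dense_ffn(x[t:t+1], e_in, e_out)[0]
    return out

# At initialization, the upcycled MoE matches the dense FFN exactly,
# because all experts are identical and the gates sum to 1.
rng = np.random.default_rng(1)
x = rng.normal(size=(4, 16))
w_in = rng.normal(size=(16, 32))
w_out = rng.normal(size=(32, 16))
experts, router = upcycle(w_in, w_out)
matches_dense = np.allclose(moe_forward(x, experts, router),
                            dense_ffn(x, w_in, w_out))
```

During training, the experts then diverge from each other as the router learns token-to-expert assignments; the point of upcycling is that this starts from a strong dense solution rather than from scratch.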

Topics

Topic Modeling · Multimodal Machine Learning Applications · Artificial Intelligence in Healthcare and Education