
This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A BF16 FMA is All You Need for DNN Training

2022 · 15 citations · 5 authors · IEEE Transactions on Emerging Topics in Computing · Open Access

Abstract

Fused Multiply-Add (FMA) functional units constitute a fundamental hardware component to train Deep Neural Networks (DNNs). Their silicon area grows quadratically with the mantissa bit count of the computer number format, which has motivated the adoption of the BrainFloat16 format (BF16). BF16 features 1 sign, 8 exponent, and 7 explicit mantissa bits. Some approaches to train DNNs achieve significant performance benefits by using the BF16 format. However, these approaches must combine BF16 with the standard IEEE 754 32-bit Floating-Point (FP32) format to achieve state-of-the-art training accuracy, which limits the impact of adopting BF16. This article proposes the first approach able to train complex DNNs entirely using the BF16 format. We propose a new class of FMA operators, $\mathrm{FMA}^{\mathrm{bf16}}_{\mathrm{n\_m}}$, that entirely rely on BF16 FMA hardware instructions and deliver the same accuracy as FP32. $\mathrm{FMA}^{\mathrm{bf16}}_{\mathrm{n\_m}}$ operators achieve performance improvements within the 1.28-1.35× range on ResNet101 with respect to FP32. $\mathrm{FMA}^{\mathrm{bf16}}_{\mathrm{n\_m}}$ enables training complex DNNs on simple low-end hardware devices without requiring expensive FP32 FMA functional units.
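As a rough illustration of how several BF16 operations can stand in for one FP32 computation, the sketch below decomposes each FP32 input into multiple BF16 terms and accumulates their cross products. This is a minimal Python sketch, not the paper's $\mathrm{FMA}^{\mathrm{bf16}}_{\mathrm{n\_m}}$ operator: the helpers to_bf16, split_bf16, and dot_bf16_nm are hypothetical names, BF16 rounding is approximated by truncating the FP32 bit pattern, and the accumulation uses Python's native float rather than an actual BF16 FMA instruction.

```python
import struct

def to_bf16(x: float) -> float:
    # Keep only the upper 16 bits of the FP32 bit pattern
    # (1 sign, 8 exponent, 7 explicit mantissa bits), i.e.
    # round toward zero to a BF16-representable value.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

def split_bf16(x: float, n: int = 2) -> list[float]:
    # Decompose x into n BF16 terms whose sum approximates x:
    # each term captures the residual left by the previous ones.
    terms = []
    residual = x
    for _ in range(n):
        t = to_bf16(residual)
        terms.append(t)
        residual -= t
    return terms

def dot_bf16_nm(a: list[float], b: list[float], n: int = 2) -> float:
    # Dot product in which every multiplicand is a BF16 value.
    # The n*n cross products per element pair mimic combining
    # several BF16 FMA instructions to recover FP32-like accuracy.
    acc = 0.0
    for x, y in zip(a, b):
        for xi in split_bf16(x, n):
            for yj in split_bf16(y, n):
                acc += xi * yj  # stand-in for one BF16 FMA
    return acc

if __name__ == "__main__":
    a = [0.1, 0.2, 0.3]
    b = [1.5, -2.5, 3.5]
    exact = sum(x * y for x, y in zip(a, b))
    print("n=1 (plain BF16):", dot_bf16_nm(a, b, n=1))
    print("n=2 (two terms): ", dot_bf16_nm(a, b, n=2))
    print("exact FP64:      ", exact)
```

With n = 1 the result degrades to plain BF16 products, while n = 2 already tracks the exact value far more closely; that trade of a few extra BF16 FMAs for FP32-level accuracy is the intuition behind the operators described in the abstract.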

Similar works