This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Random Initialization Can't Catch Up: The Advantage of Language Model Transfer for Time Series Forecasting
Citations: 0
Authors: 8
Year: 2025
Abstract
Recent works have demonstrated the effectiveness of adapting pre-trained language models (LMs) for forecasting time series in the low-data regime. We build upon these findings by analyzing the effective transfer from language models to time series forecasting under various design choices including upstream post-training, time series tokenizer and language backbone size. In the low-data regime, these design choices have a significant impact on the validation loss, with clear-cut choices that outperform others. Contrary to Hernandez et al. (2021), we observe that the validation loss of the LMs continues to smoothly decrease long after the validation loss of the randomly initialized models has converged, leading to a non-vanishing transfer gap that holds across design choices. These findings not only help shed light on the effective use of compute-efficient training for time series, but also open the way for the study of modality-agnostic properties of data distributions leveraged by these models.
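To make the central comparison concrete, below is a minimal, hypothetical sketch of the setup the abstract describes: the same language-model architecture initialized once from pre-trained weights and once at random, evaluated on a next-token objective over a discretized time series. The GPT-2 backbone, the uniform-binning tokenizer, and the toy random-walk series are illustrative assumptions, not the paper's actual design choices, and the paper's transfer gap is measured between validation-loss curves after training, whereas this snippet only contrasts the two initializations.

```python
# Hedged sketch: pretrained LM transfer vs. random initialization for
# time series tokens. Backbone, tokenizer, and data are assumptions.
import torch
from transformers import GPT2Config, GPT2LMHeadModel

pretrained = GPT2LMHeadModel.from_pretrained("gpt2").eval()  # transferred weights
scratch = GPT2LMHeadModel(GPT2Config()).eval()               # random initialization

def bin_tokenize(series: torch.Tensor, n_bins: int = 1024) -> torch.Tensor:
    """Map real values to integer token ids by uniform binning
    (one simple tokenizer choice; the paper compares several)."""
    lo, hi = series.min(), series.max()
    return ((series - lo) / (hi - lo + 1e-8) * (n_bins - 1)).long()

series = torch.randn(1, 256).cumsum(dim=-1)  # toy random-walk series
input_ids = bin_tokenize(series)

# Cross-entropy loss of each model on the same tokens; the paper's
# "transfer gap" is the loss difference that persists even after the
# randomly initialized model's validation loss has converged.
with torch.no_grad():
    loss_pre = pretrained(input_ids, labels=input_ids).loss
    loss_rand = scratch(input_ids, labels=input_ids).loss
print(f"pretrained: {loss_pre:.3f}  random-init: {loss_rand:.3f}")
print(f"illustrative gap: {loss_rand - loss_pre:.3f}")
```

In this sketch the gap is computed before any fine-tuning; the paper's finding is that even with fine-tuning to convergence in the low-data regime, the randomly initialized model does not close it.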
Similar Works
Judgment under Uncertainty: Heuristics and Biases (1974) · 27,425 citations
Judgment under Uncertainty: Heuristics and Biases (1975) · 23,075 citations
Generative Adversarial Nets (2023) · 19,841 citations
Introductory Econometrics: A Modern Approach (1999) · 14,641 citations
A Statistical Distribution Function of Wide Applicability (1951) · 11,194 citations