
Temporal horizons in forecasting: a performance-learnability trade-off

4 June 2025
Pau Vilimelis Aceituno
Jack William Miller
Noah Marti
Youssef Farag
Victor Boussange
AI4TS
Main: 9 pages · 15 figures · Bibliography: 5 pages · Appendix: 19 pages
Abstract

When training autoregressive models for dynamical systems, a critical question arises: how far into the future should the model be trained to predict? Too short a horizon may miss long-term trends, while too long a horizon can impede convergence due to accumulating prediction errors. In this work, we formalize this trade-off by analyzing how the geometry of the loss landscape depends on the training horizon. We prove that for chaotic systems, the loss landscape's roughness grows exponentially with the training horizon, while for limit cycles, it grows linearly, making long-horizon training inherently challenging. However, we also show that models trained on long horizons generalize well to short-term forecasts, whereas those trained on short horizons suffer exponentially (resp. linearly) worse long-term predictions in chaotic (resp. periodic) systems. We validate our theory through numerical experiments and discuss practical implications for selecting training horizons. Our results provide a principled foundation for hyperparameter optimization in autoregressive forecasting models.
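A rough intuition for the exponential result, from standard chaos theory: in a chaotic system, perturbations grow like e^{λt} for largest Lyapunov exponent λ > 0, so gradients backpropagated through an H-step rollout can scale like e^{λH}, which is one way to read the claim that landscape roughness grows exponentially with the training horizon. To make the hyperparameter concrete, below is a minimal PyTorch sketch (not the authors' code; the logistic map, model size, and optimizer are illustrative assumptions) of training a forecaster on an H-step autoregressive rollout.

# Minimal sketch of H-step rollout training for an autoregressive forecaster.
# The system (logistic map), architecture, and optimizer are assumptions for
# illustration, not the paper's experimental setup.
import torch
import torch.nn as nn

def logistic_map_trajectory(n_steps, x0=0.2, r=4.0):
    """Chaotic scalar dynamical system: x_{t+1} = r * x_t * (1 - x_t)."""
    xs = [x0]
    for _ in range(n_steps - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return torch.tensor(xs).unsqueeze(-1)  # shape (n_steps, 1)

def rollout_loss(model, traj, horizon):
    """Mean squared error of an H-step autoregressive rollout.

    The model is fed its own predictions for `horizon` steps, so gradients
    flow through the unrolled dynamics -- the mechanism by which the loss
    landscape roughens as the horizon grows in chaotic systems.
    """
    losses = []
    for t in range(len(traj) - horizon):
        x = traj[t]
        for k in range(1, horizon + 1):
            x = model(x)                      # feed the prediction back in
            losses.append((x - traj[t + k]) ** 2)
    return torch.stack(losses).mean()

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
traj = logistic_map_trajectory(200)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    opt.zero_grad()
    loss = rollout_loss(model, traj, horizon=4)  # H: the trade-off knob
    loss.backward()
    opt.step()

In this sketch, raising `horizon` trades easier optimization for better long-range forecasts: per the abstract, short-horizon training converges readily but its long-term error degrades exponentially (chaotic) or linearly (periodic), while long-horizon training generalizes well to short-term forecasts at an exponentially (resp. linearly) growing optimization cost.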

@article{aceituno2025_2506.03889,
  title={Temporal horizons in forecasting: a performance-learnability trade-off},
  author={Pau Vilimelis Aceituno and Jack William Miller and Noah Marti and Youssef Farag and Victor Boussange},
  journal={arXiv preprint arXiv:2506.03889},
  year={2025}
}