Efficient Model Selection for Time Series Forecasting via LLMs

Model selection is a critical step in time series forecasting, traditionally requiring extensive performance evaluations across various datasets. Meta-learning approaches aim to automate this process, but they typically depend on pre-constructed performance matrices, which are costly to build. In this work, we propose to leverage Large Language Models (LLMs) as a lightweight alternative for model selection. Our method eliminates the need for explicit performance matrices by utilizing the inherent knowledge and reasoning capabilities of LLMs. Through extensive experiments with LLaMA, GPT, and Gemini, we demonstrate that our approach outperforms traditional meta-learning techniques and heuristic baselines while significantly reducing computational overhead. These findings underscore the potential of LLMs for efficient model selection in time series forecasting.
@article{wei2025_2504.02119,
  title={Efficient Model Selection for Time Series Forecasting via LLMs},
  author={Wang Wei and Tiankai Yang and Hongjie Chen and Ryan A. Rossi and Yue Zhao and Franck Dernoncourt and Hoda Eldardiry},
  journal={arXiv preprint arXiv:2504.02119},
  year={2025}
}
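The abstract does not describe the prompting pipeline in detail. As a minimal illustration of the general idea, the sketch below builds a model-selection prompt from dataset metadata and parses the LLM's reply into a concrete model choice. The candidate model pool, metadata fields, and helper names are hypothetical, not taken from the paper, and the actual LLM call is omitted:

```python
# Hypothetical sketch of LLM-based model selection for forecasting.
# The candidate pool and metadata fields are illustrative only; a real
# system would send `prompt` to an LLM (e.g. via an API client) and
# feed the reply to parse_choice().

CANDIDATES = ["ARIMA", "ETS", "DeepAR", "N-BEATS", "PatchTST"]

def build_prompt(metadata: dict) -> str:
    """Describe the series and ask the LLM to pick one candidate model."""
    lines = [f"- {k}: {v}" for k, v in metadata.items()]
    return (
        "You are a time series forecasting expert.\n"
        "Dataset characteristics:\n" + "\n".join(lines) + "\n"
        f"Choose the single best model from {CANDIDATES}. "
        "Answer with the model name only."
    )

def parse_choice(reply: str) -> str:
    """Map a raw LLM reply to a known candidate (first match wins)."""
    for model in CANDIDATES:
        if model.lower() in reply.lower():
            return model
    raise ValueError(f"No known model in reply: {reply!r}")

prompt = build_prompt({
    "frequency": "hourly",
    "length": 17520,
    "seasonality": "daily and weekly",
    "domain": "electricity load",
})
# Example reply an LLM might return:
choice = parse_choice("I would recommend PatchTST for this dataset.")
```

Because the prompt is built only from cheap dataset statistics, no performance matrix over (dataset, model) pairs is needed, which is the source of the computational savings the abstract claims over meta-learning baselines.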