In this article, we investigate the potential of multilevel approaches to accelerate the training of transformer architectures. Building on an ordinary differential equation (ODE) interpretation of these architectures, we propose a suitable way of varying the discretization of these ODE Transformers to accelerate training. We validate our approach experimentally through a comparison with the standard training procedure.
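To make the ODE viewpoint concrete, the sketch below illustrates the general idea (not the authors' implementation): each residual transformer block is read as one forward-Euler step x <- x + h * F(x) of an ODE dx/dt = F(x), and a coarse-to-fine "multilevel" refinement doubles the number of steps (halving the step size) so that training can start on a cheap, coarse discretization and continue on a finer one. All names (ODEBlock, ODETransformer, refine) and the doubling strategy are illustrative assumptions.

```python
# Minimal sketch of an ODE reading of transformer blocks and a coarse-to-fine
# refinement of the discretization. Assumed names, not taken from the paper.
import copy
import torch
import torch.nn as nn


class ODEBlock(nn.Module):
    """Residual transformer block interpreted as one explicit Euler step."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, h: float) -> torch.Tensor:
        # x_{l+1} = x_l + h * F(x_l): h is the step size of the discretized ODE.
        y = self.norm1(x)
        x = x + h * self.attn(y, y, y)[0]
        x = x + h * self.mlp(self.norm2(x))
        return x


class ODETransformer(nn.Module):
    """Stack of L Euler steps covering a fixed time horizon T, with h = T / L."""

    def __init__(self, d_model: int, n_heads: int, n_layers: int, horizon: float = 1.0):
        super().__init__()
        self.blocks = nn.ModuleList(ODEBlock(d_model, n_heads) for _ in range(n_layers))
        self.horizon = horizon

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.horizon / len(self.blocks)
        for block in self.blocks:
            x = block(x, h)
        return x


def refine(coarse: ODETransformer) -> ODETransformer:
    """Coarse-to-fine step: duplicate each trained block, doubling the depth and
    halving the step size, so the fine model starts close to the coarse one."""
    fine = copy.deepcopy(coarse)
    fine.blocks = nn.ModuleList(
        copy.deepcopy(b) for b in coarse.blocks for _ in range(2)
    )
    return fine


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)                        # (batch, sequence, d_model)
    coarse = ODETransformer(d_model=64, n_heads=4, n_layers=3)
    # ... train `coarse` for a few epochs (cheap: few layers, large step) ...
    fine = refine(coarse)                             # 6 layers, half the step size
    print(fine(x).shape)                              # torch.Size([2, 16, 64])
```

In this reading, varying the discretization amounts to changing the number of Euler steps used to cover the same time horizon, which is what makes a multilevel training schedule possible; the exact schedule and prolongation used in the paper may differ from the simple block duplication shown here.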
@article{lauga2025_2504.18590,
  title   = {A multilevel approach to accelerate the training of Transformers},
  author  = {Guillaume Lauga and Maël Chaumette and Edgar Desainte-Maréville and Étienne Lasalle and Arthur Lebeurrier},
  journal = {arXiv preprint arXiv:2504.18590},
  year    = {2025}
}