A stochastic gradient method for trilevel optimization

Abstract

Following the success of bilevel optimization in recent years, similar methodologies have begun to be applied to the more difficult problems that arise in trilevel optimization. Chief among these applications are new machine learning formulations posed in the trilevel setting, which call for efficient and theoretically sound stochastic methods. In this work, we propose the first stochastic gradient descent method for solving unconstrained trilevel optimization problems and provide a convergence theory that covers all forms of inexactness in the trilevel adjoint gradient: inexact solutions of the middle-level and lower-level problems, inexact computation of the trilevel adjoint formula, and noisy estimates of the gradients, Hessians, Jacobians, and third-order derivative tensors involved. We demonstrate the promise of our approach with numerical results on both synthetic trilevel problems and trilevel formulations of adversarial hyperparameter tuning.
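
As a point of reference (in notation assumed here, which may differ from the paper's), the unconstrained trilevel problem has the nested form

  \min_{x_1} f_1(x_1, x_2^*(x_1), x_3^*(x_1, x_2^*(x_1)))
  \text{where } x_2^*(x_1) \in \arg\min_{x_2} f_2(x_1, x_2, x_3^*(x_1, x_2)),
  \quad x_3^*(x_1, x_2) \in \arg\min_{x_3} f_3(x_1, x_2, x_3),

so the adjoint gradient of f_1 with respect to x_1 must differentiate through both the middle-level and lower-level solution maps, which is where the Hessians, Jacobians, and third-order derivatives mentioned in the abstract enter.

Below is a minimal Python sketch of the kind of loop the abstract describes: an inexact middle-level solve, a lower-level solve, and a noisy adjoint gradient estimate feeding a stochastic gradient step. The one-dimensional quadratic instance and all function names are hypothetical illustrations, not the paper's algorithm or notation.

import numpy as np

# Hypothetical toy trilevel instance (for illustration only):
#   upper:  min_{x1} f1 = 0.5 * x3^2     at x3 = x3*(x1, x2*(x1))
#   middle: x2*(x1)     = argmin_{x2} 0.5*(x2 - x1)^2 + 0.5*(x1 + x2)^2
#   lower:  x3*(x1, x2) = argmin_{x3} 0.5*(x3 - x1 - x2)^2 = x1 + x2

def solve_lower(x1, x2):
    # Lower-level problem has the closed-form solution x3* = x1 + x2.
    return x1 + x2

def solve_middle_inexact(x1, steps=25, lr=0.2):
    # Inexact middle-level solve: gradient descent on the reduced objective
    # F2(x1, x2) = f2(x1, x2, x3*(x1, x2)), whose derivative here is 2*x2.
    x2 = 1.0
    for _ in range(steps):
        x2 -= lr * 2.0 * x2
    return x2

def adjoint_gradient_estimate(x1, x2, x3, rng, sigma=0.05):
    # Adjoint gradient of f1 w.r.t. x1 via implicit differentiation.
    # For this quadratic toy, dx2*/dx1 = 0 and the total sensitivity is
    # dx3/dx1 = (dx3*/dx1) + (dx3*/dx2) * (dx2*/dx1) = 1 + 1*0 = 1,
    # so d f1 / d x1 = x3 * dx3/dx1 = x3.  Gaussian noise mimics the
    # stochastic estimates allowed by the convergence analysis.
    dx2_dx1 = 0.0
    dx3_dx1 = 1.0 + 1.0 * dx2_dx1
    return x3 * dx3_dx1 + sigma * rng.standard_normal()

rng = np.random.default_rng(0)
x1 = 3.0
for k in range(200):
    x2 = solve_middle_inexact(x1)                    # inexact middle solve
    x3 = solve_lower(x1, x2)                         # lower-level solve
    g = adjoint_gradient_estimate(x1, x2, x3, rng)   # noisy adjoint gradient
    x1 -= 0.1 * g                                    # stochastic gradient step
print(f"x1 after 200 SGD steps: {x1:.4f} (toy minimizer is x1 = 0)")

On this toy instance the exact adjoint gradient reduces to x1 itself, so the iterates contract toward the solution x1 = 0 up to the injected noise, loosely mirroring the inexactness regime the paper's convergence theory is stated to cover.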

@article{giovannelli2025_2505.06805,
  title={A stochastic gradient method for trilevel optimization},
  author={Tommaso Giovannelli and Griffin Dean Kent and Luis Nunes Vicente},
  journal={arXiv preprint arXiv:2505.06805},
  year={2025}
}