We present differentially private (DP) algorithms for bilevel optimization, a problem class that has received significant attention lately in various machine learning applications. These are the first algorithms for such problems under standard DP constraints, and are also the first to avoid Hessian computations, which are prohibitive in large-scale settings. Under the well-studied setting in which the upper-level objective is not necessarily convex and the lower-level problem is strongly convex, our proposed gradient-based $(\epsilon,\delta)$-DP algorithm returns a point with hypergradient norm at most $\widetilde{\mathcal{O}}\big((\sqrt{d_x}/(\epsilon n))^{1/2}+(\sqrt{d_y}/(\epsilon n))^{1/3}\big)$, where $n$ is the dataset size and $d_x, d_y$ are the upper/lower level dimensions. Our analysis covers constrained and unconstrained problems alike, accounts for mini-batch gradients, and applies to both empirical and population losses. As an application, we specialize our analysis to derive a simple private rule for tuning a regularization hyperparameter.
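To make the two key ingredients named in the abstract concrete, a Hessian-free hypergradient estimate and Gaussian-mechanism noise, here is a minimal Python sketch. It is a hypothetical illustration, not the paper's algorithm: it assumes a quadratic toy bilevel instance, a generic fully first-order (penalty-based) hypergradient estimate, per-sample clipping at a norm `C`, and naive per-step privacy budgeting via basic composition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance (illustration only, not from the paper):
#   lower level:  g(x, y)   = 0.5 * ||y - A @ x||^2   (strongly convex in y)
#   upper level:  f_z(x, y) = 0.5 * ||y - z||^2       (per-sample loss)
n, d_x, d_y = 500, 4, 3
A = rng.normal(size=(d_y, d_x))
Z = rng.normal(size=(n, d_y))

def grad_y_g(x, y):
    return y - A @ x

def grad_x_g(x, y):
    return -A.T @ (y - A @ x)

def inner_gd(grad, y0, steps=100, lr=0.5):
    # Plain gradient descent on a strongly convex inner objective: no Hessians.
    y = y0.copy()
    for _ in range(steps):
        y = y - lr * grad(y)
    return y

def hypergrad_sample(x, z, lam=50.0):
    # Generic fully first-order (penalty) hypergradient estimate:
    #   grad F(x) ~ lam * (grad_x g(x, y_lam) - grad_x g(x, y_star)),
    # with y_star ~ argmin_y g(x, y) and y_lam ~ argmin_y [g(x, y) + f_z(x, y)/lam].
    y_star = inner_gd(lambda y: grad_y_g(x, y), np.zeros(d_y))
    y_lam = inner_gd(lambda y: grad_y_g(x, y) + (y - z) / lam, np.zeros(d_y))
    return lam * (grad_x_g(x, y_lam) - grad_x_g(x, y_star))

# Outer loop: clip per-sample hypergradients, release a noisy mean via the
# Gaussian mechanism, and take a gradient step. The budget split below uses
# naive basic composition; a tight analysis would use a privacy accountant.
eps, delta, T, C, lr = 2.0, 1e-5, 30, 5.0, 0.1
eps_t, delta_t = eps / T, delta / T
sigma = (2 * C / n) * np.sqrt(2 * np.log(1.25 / delta_t)) / eps_t  # sensitivity 2C/n

x = np.zeros(d_x)
for _ in range(T):
    grads = np.stack([hypergrad_sample(x, z) for z in Z])
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads *= np.minimum(1.0, C / np.maximum(norms, 1e-12))  # clip each to norm C
    x = x - lr * (grads.mean(axis=0) + sigma * rng.normal(size=d_x))
```

Note that every quantity in the sketch is computed from gradient evaluations alone, which is what "avoiding Hessian computations" refers to; as the penalty parameter `lam` grows, this kind of first-order estimate approaches the true hypergradient.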