Variational Learning Finds Flatter Solutions at the Edge of Stability

Variational Learning (VL) has recently gained popularity for training deep neural networks and is competitive with standard learning methods. Part of its empirical success can be explained by theories such as PAC-Bayes bounds, minimum description length, and the marginal likelihood, but there are few tools to analyze the implicit regularization at play. Here, we analyze the implicit regularization of VL through the Edge of Stability (EoS) framework. EoS has previously been used to show that gradient descent can find flat solutions; we extend this result to VL and show that it can find even flatter solutions. This is obtained by controlling the posterior covariance and the number of Monte Carlo samples drawn from the posterior. These results are derived in a similar fashion to the standard EoS literature for deep learning: we first derive a result for a quadratic problem and then extend it to deep neural networks. We empirically validate these findings on a wide variety of large networks, such as ResNet and ViT, and find that the theoretical results closely match the empirical ones. Ours is the first work to analyze the EoS dynamics in VL.
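To make the quadratic setting concrete, the following is a minimal, hypothetical Python sketch contrasting plain gradient descent with a variational-style update that averages gradients over K Monte Carlo samples from a Gaussian posterior N(m, sigma^2), on a one-dimensional quadratic loss L(w) = 0.5*a*w^2. It only illustrates the classical gradient-descent stability threshold (sharpness a must satisfy a <= 2/eta); the function names, the choices of sigma and K, and the update rule are assumptions for illustration and do not reproduce the paper's exact VL algorithm or its flatness result.

```python
# Illustrative sketch (not the paper's algorithm): Edge-of-Stability on a 1-D quadratic.
# Plain GD on L(w) = 0.5*a*w^2 iterates w <- (1 - eta*a)*w, so it is stable iff a <= 2/eta.
# The "variational-style" update below is a hypothetical stand-in: it averages gradients
# over K samples from a Gaussian posterior N(m, sigma^2) and updates the posterior mean.
import numpy as np

def gd_quadratic(a, eta, w0=1.0, steps=50):
    """Plain gradient descent on L(w) = 0.5*a*w^2."""
    w = w0
    for _ in range(steps):
        w -= eta * a * w
    return abs(w)

def vl_quadratic(a, eta, sigma=0.1, K=4, m0=1.0, steps=50, seed=0):
    """Hypothetical variational-style update: Monte Carlo gradient averaged over K posterior samples."""
    rng = np.random.default_rng(seed)
    m = m0
    for _ in range(steps):
        samples = m + sigma * rng.standard_normal(K)   # draw K samples from N(m, sigma^2)
        grad = np.mean(a * samples)                    # MC estimate of the expected gradient
        m -= eta * grad
    return abs(m)

eta = 0.1
for a in (15.0, 25.0):  # sharpness below vs. above the GD threshold 2/eta = 20
    print(f"sharpness {a}: |w_GD| = {gd_quadratic(a, eta):.3g}, |m_VL| = {vl_quadratic(a, eta):.3g}")
```

With eta = 0.1, the gradient-descent iterate converges for sharpness 15 but diverges for sharpness 25, mirroring the 2/eta threshold; how the posterior covariance and the number of Monte Carlo samples shift this threshold for VL is the subject of the paper's analysis and is not reproduced by this sketch.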
@article{ghosh2025_2506.12903,
  title   = {Variational Learning Finds Flatter Solutions at the Edge of Stability},
  author  = {Avrajit Ghosh and Bai Cong and Rio Yokota and Saiprasad Ravishankar and Rongrong Wang and Molei Tao and Mohammad Emtiyaz Khan and Thomas Möllenhoff},
  journal = {arXiv preprint arXiv:2506.12903},
  year    = {2025}
}