Implicit Regularization of the Deep Inverse Prior Trained with Inertia

Solving inverse problems with neural networks comes with very few theoretical recovery guarantees. In this work we provide convergence and recovery guarantees for self-supervised neural networks applied to inverse problems, such as the Deep Image/Inverse Prior, trained with inertia featuring both viscous and geometric Hessian-driven damping. We study both the continuous-time case, i.e., the trajectory of a dynamical system, and the discrete case, which leads to an inertial algorithm with an adaptive step-size. In the continuous-time case, we show that the network can be trained with an optimal accelerated exponential convergence rate compared to the rate obtained with gradient flow. We also show that training a network with our inertial algorithm enjoys similar recovery guarantees, though with a less sharp linear convergence rate.
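
For concreteness, a generic form of such inertial training dynamics, combining a viscous damping term and a geometric Hessian-driven damping term, can be sketched as follows; the damping coefficients $\alpha, \beta$, the self-supervised loss $f$, the network $g_\theta$ with fixed input $z$, the forward operator $A$, and the observation $y$ below are illustrative assumptions, not necessarily the paper's exact formulation:
\[
\ddot{\theta}(t) + \alpha\,\dot{\theta}(t) + \beta\,\nabla^2 f(\theta(t))\,\dot{\theta}(t) + \nabla f(\theta(t)) = 0,
\qquad f(\theta) = \tfrac{1}{2}\,\|A\,g_\theta(z) - y\|^2 .
\]
Under the same assumptions, a standard explicit discretization of dynamics of this type yields an inertial update with a gradient-correction term standing in for the Hessian damping and a step-size $s_k$ that may be chosen adaptively,
\[
\theta_{k+1} = \theta_k + a_k\,(\theta_k - \theta_{k-1}) - s_k\,\nabla f(\theta_k) - b_k\,\big(\nabla f(\theta_k) - \nabla f(\theta_{k-1})\big),
\]
which is a sketch of the kind of inertial algorithm referred to above, not the specific scheme analyzed in the paper.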
@article{buskulic2025_2506.02986,
  title={Implicit Regularization of the Deep Inverse Prior Trained with Inertia},
  author={Nathan Buskulic and Jalal Fadili and Yvain Quéau},
  journal={arXiv preprint arXiv:2506.02986},
  year={2025}
}