
Learning a Gaussian Mixture for Sparsity Regularization in Inverse Problems

Main: 21 pages, 5 figures, 4 tables
Bibliography: 2 pages
Appendix: 6 pages
Abstract

In inverse problems, it is widely recognized that the incorporation of a sparsity prior yields a regularization effect on the solution. This approach is grounded in the a priori assumption that the unknown can be appropriately represented in a basis with a limited number of significant components, while most coefficients are close to zero. This occurs frequently in real-world scenarios, such as with piecewise smooth signals. In this study, we propose a probabilistic sparsity prior formulated as a mixture of degenerate Gaussians, capable of modeling sparsity with respect to a generic basis. Under this premise, we design a neural network that can be interpreted as the Bayes estimator for linear inverse problems. Additionally, we put forth both a supervised and an unsupervised training strategy to estimate the parameters of this network. To evaluate the effectiveness of our approach, we conduct a numerical comparison with commonly employed sparsity-promoting regularization techniques, namely LASSO, group LASSO, iterative hard thresholding, and sparse coding/dictionary learning. Notably, our reconstructions consistently exhibit lower mean square error values across all 1D datasets utilized for the comparisons, even in cases where the datasets significantly deviate from a Gaussian mixture model.
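To illustrate the idea behind a Gaussian-mixture sparsity prior, the following is a minimal NumPy/SciPy sketch of the standard closed-form Bayes (MMSE) estimator for a linear observation y = Ax + noise when x follows a Gaussian mixture whose components may have singular (degenerate) covariances, each concentrated on a low-dimensional subspace. This is not the paper's neural-network parameterization or training procedure; all symbols and the function name are illustrative placeholders.

```python
# Minimal sketch (assumed setup, not the authors' method): closed-form MMSE
# estimator for y = A x + N(0, sigma^2 I) under a Gaussian mixture prior
# x ~ sum_k weights[k] * N(means[k], covs[k]), where covs[k] may be singular.
import numpy as np
from scipy.stats import multivariate_normal

def gmm_mmse_estimate(y, A, sigma, weights, means, covs):
    m = len(y)
    post_means, log_evidences = [], []
    for w, mu, S in zip(weights, means, covs):
        # Marginal of y under component k: N(A mu, A S A^T + sigma^2 I)
        Sy = A @ S @ A.T + sigma**2 * np.eye(m)
        log_evidences.append(np.log(w) + multivariate_normal.logpdf(y, A @ mu, Sy))
        # Posterior mean of x under component k (no inversion of S needed,
        # so degenerate covariances are handled directly)
        gain = S @ A.T @ np.linalg.solve(Sy, y - A @ mu)
        post_means.append(mu + gain)
    # Posterior component weights (responsibilities), then the mixture mean
    log_post = np.array(log_evidences)
    post_w = np.exp(log_post - log_post.max())
    post_w /= post_w.sum()
    return sum(w * m_k for w, m_k in zip(post_w, post_means))

# Example: degenerate rank-1 components, each supported on one coordinate of R^3,
# observed through a single linear measurement (all values are made up).
n = 3
weights = np.full(n, 1.0 / n)
means = [np.zeros(n) for _ in range(n)]
covs = [np.outer(np.eye(n)[k], np.eye(n)[k]) for k in range(n)]  # singular covariances
A = np.array([[1.0, 0.5, 0.2]])
x_hat = gmm_mmse_estimate(np.array([0.7]), A, sigma=0.1,
                          weights=weights, means=means, covs=covs)
```

In this toy setup, each mixture component places all its mass on a single coordinate axis, so the prior encodes exact 1-sparsity; the paper's contribution is to learn such mixture parameters (supervised or unsupervised) and to realize the resulting Bayes estimator as a neural network.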

@article{alberti2025_2401.16612,
  title={Learning a Gaussian Mixture for Sparsity Regularization in Inverse Problems},
  author={Giovanni S. Alberti and Luca Ratti and Matteo Santacesaria and Silvia Sciutto},
  journal={arXiv preprint arXiv:2401.16612},
  year={2025}
}