Universal Architectures for the Learning of Polyhedral Norms and Convex Regularizers

Abstract

This paper addresses the task of learning convex regularizers to guide the reconstruction of images from limited data. By imposing that the reconstruction be amplitude-equivariant, we narrow down the class of admissible functionals to those that can be expressed as a power of a seminorm. We then show that such functionals can be approximated to arbitrary precision with the help of polyhedral norms. In particular, we identify two dual parameterizations of such systems: (i) a synthesis form with an \ell_1-penalty that involves some learnable dictionary; and (ii) an analysis form with an \ell_\infty-penalty that involves a trainable regularization operator. After providing geometric insights and proving that the two forms are universal, we propose an implementation that relies on a specific architecture (tight frame with a weighted \ell_1 penalty) that is easy to train. We illustrate its use for denoising and the reconstruction of biomedical images. We find that the proposed framework outperforms the sparsity-based methods of compressed sensing, while offering essentially the same convergence and robustness guarantees.
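To make the two parameterizations concrete, the following is a minimal numerical sketch of (ii), the analysis form of a polyhedral norm, together with the weighted \ell_1 penalty used in the proposed tight-frame architecture. The operator W and the weights lam below are random placeholders standing in for the trainable regularization operator and learned weights; they are illustrative assumptions, not the trained components from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical analysis operator (stands in for the trainable
# regularization operator of the analysis form).
W = rng.standard_normal((8, 3))

# Hypothetical nonnegative weights (stands in for the learned
# weights of the weighted l1 penalty).
lam = np.abs(rng.standard_normal(8))


def analysis_polyhedral_norm(x: np.ndarray, W: np.ndarray) -> float:
    """Analysis-form polyhedral norm ||W x||_inf.

    Its unit ball {x : |<w_k, x>| <= 1 for all rows w_k of W}
    is an intersection of half-spaces, hence a polytope.
    """
    return float(np.max(np.abs(W @ x)))


def weighted_l1_penalty(x: np.ndarray, W: np.ndarray, lam: np.ndarray) -> float:
    """Weighted l1 penalty ||diag(lam) W x||_1 over the analysis coefficients."""
    return float(np.sum(lam * np.abs(W @ x)))


x = rng.standard_normal(3)

# Both functionals are positively homogeneous (degree 1),
# consistent with being (semi)norm-type regularizers.
print(analysis_polyhedral_norm(2.0 * x, W), 2.0 * analysis_polyhedral_norm(x, W))
print(weighted_l1_penalty(3.0 * x, W, lam), 3.0 * weighted_l1_penalty(x, W, lam))
```

The homogeneity checked at the end is exactly the amplitude-equivariance property that motivates restricting the regularizer to powers of seminorms.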

@article{unser2025_2503.19190,
  title={Universal Architectures for the Learning of Polyhedral Norms and Convex Regularizers},
  author={Michael Unser and Stanislas Ducotterd},
  journal={arXiv preprint arXiv:2503.19190},
  year={2025}
}