Learning Firmly Nonexpansive Operators

19 July 2024
Kristian Bredies
Jonathan Chirinos-Rodriguez
E. Naldi
Abstract

This paper proposes a data-driven approach for constructing firmly nonexpansive operators. We demonstrate its applicability in Plug-and-Play methods, where classical algorithms such as forward-backward splitting, the Chambolle–Pock primal-dual iteration, the Douglas–Rachford iteration, or the alternating direction method of multipliers (ADMM) are modified by replacing one proximal map with a learned firmly nonexpansive operator. We provide a sound mathematical framework for the problem of learning such an operator via expected and empirical risk minimization. We prove that, as the number of training points increases, the empirical risk minimization problem converges (in the sense of Gamma-convergence) to the expected risk minimization problem. Further, we derive a solution strategy that yields firmly nonexpansive, piecewise affine operators on the convex envelope of the training set. We show that this operator converges, in an appropriate sense, to the best empirical solution as the number of points in the envelope increases. Finally, the experimental section details practical implementations of the method and presents an application in image denoising.
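The abstract describes Plug-and-Play schemes in which a proximal map inside a classical splitting algorithm is swapped for a learned firmly nonexpansive operator. Below is a minimal sketch of that idea for forward-backward splitting, assuming a least-squares data-fidelity term; the operator `T`, the matrix `A`, the data `y`, and the step size are illustrative placeholders, not the paper's construction (the paper learns piecewise affine firmly nonexpansive operators on the convex envelope of the training points).

```python
import numpy as np

def pnp_forward_backward(A, y, T, x0, step=1.0, n_iter=100):
    """Sketch of a Plug-and-Play forward-backward iteration:
    x_{k+1} = T(x_k - step * A^T (A x_k - y)),
    where T is a (learned) firmly nonexpansive operator replacing the prox."""
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)   # gradient of the data-fidelity term 0.5*||Ax - y||^2
        x = T(x - step * grad)     # learned operator stands in for the proximal map
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 20))
    x_true = rng.standard_normal(20)
    y = A @ x_true + 0.01 * rng.standard_normal(30)
    # Stand-in operator: averaging the identity with a projection onto a box
    # (a nonexpansive map) gives a firmly nonexpansive operator.
    T = lambda z: 0.5 * (z + np.clip(z, -1.0, 1.0))
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step within (0, 2/L) for the data term
    x_hat = pnp_forward_backward(A, y, T, x0=np.zeros(20), step=step)
    print("residual:", np.linalg.norm(A @ x_hat - y))
```

In an actual Plug-and-Play setup, `T` would be the trained operator (e.g., a learned denoiser constrained to be firmly nonexpansive); the stand-in above only illustrates where it enters the iteration.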
