arXiv:2009.00801

Extensions to the Proximal Distance Method of Constrained Optimization

2 September 2020
Alfonso Landeros
Oscar Hernan Madrid Padilla
Hua Zhou
K. Lange
Abstract

The current paper studies the problem of minimizing a loss $f(\boldsymbol{x})$ subject to constraints of the form $\boldsymbol{D}\boldsymbol{x} \in S$, where $S$ is a closed set, convex or not, and $\boldsymbol{D}$ is a fusion matrix. Fusion constraints can capture smoothness, sparsity, or more general constraint patterns. To tackle this generic class of problems, we combine the Beltrami-Courant penalty method of optimization with the proximal distance principle. The latter is driven by minimization of penalized objectives $f(\boldsymbol{x}) + \frac{\rho}{2}\,\text{dist}(\boldsymbol{D}\boldsymbol{x}, S)^2$ involving large tuning constants $\rho$ and the squared Euclidean distance of $\boldsymbol{D}\boldsymbol{x}$ from $S$. The next iterate $\boldsymbol{x}_{n+1}$ of the corresponding proximal distance algorithm is constructed from the current iterate $\boldsymbol{x}_n$ by minimizing the majorizing surrogate function $f(\boldsymbol{x}) + \frac{\rho}{2}\|\boldsymbol{D}\boldsymbol{x} - \mathcal{P}_S(\boldsymbol{D}\boldsymbol{x}_n)\|^2$. For fixed $\rho$ and convex $f(\boldsymbol{x})$ and $S$, we prove convergence, provide convergence rates, and demonstrate linear convergence under stronger assumptions. We also construct a steepest descent (SD) variant to avoid costly linear system solves. To benchmark our algorithms, we adapt the alternating direction method of multipliers (ADMM) and compare on extensive numerical tests, including problems in metric projection, convex regression, convex clustering, total variation image denoising, and projection of a matrix to one with a good condition number. Our experiments demonstrate the superior speed and acceptable accuracy of the SD variant on high-dimensional problems. Julia code to replicate all of our experiments can be found at https://github.com/alanderos91/ProximalDistanceAlgorithms.jl.
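To make the iteration concrete, here is a minimal Julia sketch of the update $\boldsymbol{x}_{n+1} = \arg\min_{\boldsymbol{x}} f(\boldsymbol{x}) + \frac{\rho}{2}\|\boldsymbol{D}\boldsymbol{x} - \mathcal{P}_S(\boldsymbol{D}\boldsymbol{x}_n)\|^2$ for the special case of metric projection, $f(\boldsymbol{x}) = \frac{1}{2}\|\boldsymbol{x} - \boldsymbol{y}\|^2$, with $S$ taken to be a (nonconvex) sparsity set. This is an illustrative sketch under those assumptions, not the authors' ProximalDistanceAlgorithms.jl package; the names `project_sparse`, `prox_distance`, and `prox_distance_sd` are hypothetical, and $\rho$ is held fixed throughout.

```julia
using LinearAlgebra

# Hypothetical helper: project z onto S = {vectors with at most k nonzeros}
# by keeping the k largest-magnitude entries and zeroing the rest.
function project_sparse(z::AbstractVector, k::Integer)
    p = zero(z)
    keep = partialsortperm(abs.(z), 1:k; rev = true)
    p[keep] = z[keep]
    return p
end

# Proximal distance iteration for f(x) = 0.5*||x - y||^2 with fixed ρ.
# Each step minimizes the surrogate f(x) + (ρ/2)*||D*x - P_S(D*x_n)||^2,
# which for this quadratic f reduces to the linear system
#   (I + ρ*D'D) x = y + ρ*D' * P_S(D*x_n).
function prox_distance(y::AbstractVector, D::AbstractMatrix, k::Integer;
                       ρ::Real = 1.0, iters::Integer = 200)
    x = copy(y)
    F = cholesky(Symmetric(ρ * (D' * D) + I))  # factor once per fixed ρ
    for _ in 1:iters
        z = project_sparse(D * x, k)   # P_S(D*x_n)
        x = F \ (y + ρ * (D' * z))     # exact surrogate minimizer
    end
    return x
end

# Steepest descent (SD) variant: replaces the linear solve with a gradient
# step on the surrogate, using the exact line search available because the
# surrogate is quadratic.
function prox_distance_sd(y::AbstractVector, D::AbstractMatrix, k::Integer;
                          ρ::Real = 1.0, iters::Integer = 1000)
    x = copy(y)
    for _ in 1:iters
        z = project_sparse(D * x, k)
        g = (x - y) + ρ * (D' * (D * x - z))  # gradient of the surrogate
        gg = dot(g, g)
        gg < 1e-20 && break
        Dg = D * g
        x -= (gg / (gg + ρ * dot(Dg, Dg))) * g  # exact line-search step
    end
    return x
end

# Toy usage: denoise a noisy piecewise-constant signal, where D takes first
# differences and S allows at most two nonzero differences (two jumps).
n = 50
truth = vcat(fill(0.0, 20), fill(3.0, 15), fill(-1.0, 15))
y = truth .+ 0.3 .* randn(n)
D = diagm(n - 1, n, 0 => -ones(n - 1), 1 => ones(n - 1))
x̂ = prox_distance(y, D, 2; ρ = 100.0)
```

Note that with $\rho$ held fixed the constraint $\boldsymbol{D}\boldsymbol{x} \in S$ is only enforced approximately; the penalty term drives $\text{dist}(\boldsymbol{D}\boldsymbol{x}, S)$ toward zero as $\rho$ grows, which is why the method works with large tuning constants. The Cholesky factorization in `prox_distance` is exactly the kind of linear system cost that the SD variant is designed to avoid on high-dimensional problems.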
