On the Suboptimality of Proximal Gradient Descent for $\ell^{0}$ Sparse Approximation

5 September 2017
Yingzhen Yang
Jiashi Feng
Nebojsa Jojic
Jianchao Yang
Thomas S. Huang
arXiv: 1709.01230
Abstract

We study the proximal gradient descent (PGD) method for the $\ell^{0}$ sparse approximation problem, as well as its acceleration with randomized algorithms. We first give a theoretical analysis of PGD, bounding the gap between the sub-optimal solution produced by PGD and the globally optimal solution of the $\ell^{0}$ sparse approximation problem, under conditions weaker than the Restricted Isometry Property widely used in the compressive sensing literature. Moreover, we propose randomized algorithms that accelerate PGD using randomized low-rank matrix approximation (PGD-RMA) and randomized dimension reduction (PGD-RDR). These randomized algorithms substantially reduce the computational cost of the original PGD for the $\ell^{0}$ sparse approximation problem, and the resulting sub-optimal solution retains provable suboptimality: the sub-optimal solution of the reduced problem still has a bounded gap to the globally optimal solution of the original problem.
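To make the setup concrete, below is a minimal NumPy sketch of PGD for an $\ell^{0}$-regularized least-squares objective, $\min_x \frac{1}{2}\|y - Ax\|_2^2 + \lambda\|x\|_0$, whose proximal operator is hard thresholding, together with a Gaussian-sketch stand-in for the randomized dimension reduction idea. The objective form, the step size $1/L$, and the names pgd_l0, pgd_l0_rdr, and m_sub are illustrative assumptions, not the paper's exact algorithms or guarantees.

import numpy as np

def hard_threshold(z, tau):
    """Proximal operator of tau*||.||_0: keep entries with |z_i| > sqrt(2*tau)."""
    out = z.copy()
    out[np.abs(z) <= np.sqrt(2.0 * tau)] = 0.0
    return out

def pgd_l0(A, y, lam, n_iter=200):
    """PGD (iterative hard thresholding) with step size 1/L, L = ||A||_2^2."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    eta = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of 0.5*||y - Ax||^2
        x = hard_threshold(x - eta * grad, eta * lam)
    return x

def pgd_l0_rdr(A, y, lam, m_sub, n_iter=200, seed=0):
    """Hypothetical randomized dimension reduction in the spirit of PGD-RDR:
    sketch the rows of A and y with a Gaussian map, then run PGD on the
    smaller problem. The sketch size m_sub trades accuracy for speed."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((m_sub, A.shape[0])) / np.sqrt(m_sub)
    return pgd_l0(S @ A, S @ y, lam, n_iter=n_iter)

# Tiny usage example on a synthetic sparse-recovery instance.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 50))
x_true = np.zeros(50)
x_true[:5] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = pgd_l0(A, y, lam=0.05)
print("recovered support:", np.flatnonzero(x_hat))

The sketch dimension m_sub plays the role of the reduced problem size: the abstract's claim is that the sub-optimal solution of such a reduced problem still has a bounded gap to the global optimum of the original problem.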
