
Fast Learning with Nonconvex L1-2 Regularization

29 October 2016
Quanming Yao, James T. Kwok, Xiawei Guo
arXiv: 1610.09461
Abstract

Convex regularizers are often used for sparse learning. They are easy to optimize, but can lead to inferior prediction performance. The difference of the $\ell_1$ and $\ell_2$ norms ($\ell_{1-2}$) has recently been proposed as a nonconvex regularizer. It yields better recovery than both the $\ell_0$ and $\ell_1$ regularizers in compressed sensing. However, efficiently optimizing the resulting learning problem remains challenging. The main difficulty is that both the $\ell_1$ and $\ell_2$ norms in $\ell_{1-2}$ are nondifferentiable, so existing optimization algorithms cannot be applied. In this paper, we show that a closed-form solution can be derived for the proximal step associated with this regularizer. We further extend the result to low-rank matrix learning and the total variation model. Experiments on both synthetic and real data sets show that the resulting accelerated proximal gradient algorithm is more efficient than other nonconvex optimization algorithms.
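The abstract states that the proximal step of the $\ell_{1-2}$ regularizer, $\operatorname{prox}_{\lambda(\|\cdot\|_1-\|\cdot\|_2)}(y)$, admits a closed form, and that plugging it into an accelerated proximal gradient method gives a fast solver. The sketch below is not the authors' code; it implements the case split commonly reported for this proximal operator (soft-threshold and rescale when $\|y\|_\infty>\lambda$, otherwise keep a single largest-magnitude entry), together with a plain FISTA-style loop for a least-squares loss. Function names (`prox_l1_minus_l2`, `apg_l1_minus_l2`) and the simple momentum rule are illustrative assumptions; the paper's algorithm may handle the nonconvexity differently.

```python
# Minimal sketch, assuming the standard closed form for
# prox_{lam(||.||_1 - ||.||_2)} and a generic FISTA-style loop for
#   min_x 0.5*||A x - b||^2 + lam*(||x||_1 - ||x||_2).
# This is an illustration, not the paper's implementation.
import numpy as np


def prox_l1_minus_l2(y, lam):
    """Closed-form proximal operator of lam * (||x||_1 - ||x||_2)."""
    y = np.asarray(y, dtype=float)
    y_inf = np.max(np.abs(y))
    if y_inf == 0.0:                       # zero input maps to zero
        return np.zeros_like(y)
    if y_inf > lam:                        # generic case: soft-threshold, then rescale
        z = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
        return z * (np.linalg.norm(z) + lam) / np.linalg.norm(z)
    # 0 < ||y||_inf <= lam: a 1-sparse solution keeping one largest-magnitude entry
    x = np.zeros_like(y)
    i = int(np.argmax(np.abs(y)))
    x[i] = y[i]
    return x


def apg_l1_minus_l2(A, b, lam, n_iter=500):
    """Accelerated proximal gradient for 0.5*||Ax - b||^2 + lam*(||x||_1 - ||x||_2)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    v, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ v - b)                       # gradient of the smooth term
        x_new = prox_l1_minus_l2(v - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        v = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```

Note that for a 1-sparse vector $\|x\|_1 = \|x\|_2$, so the regularizer vanishes in the second case, which is why keeping the largest entry unchanged beats shrinking it when $\|y\|_\infty \le \lambda$. Convergence guarantees for plain FISTA do not carry over to this nonconvex setting; the paper's accelerated scheme should be consulted for the actual algorithm and analysis.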
