  3. 1711.03634
Alternating minimization for dictionary learning: Local Convergence Guarantees

9 November 2017
Niladri S. Chatterji
Peter L. Bartlett
Abstract

We present theoretical guarantees for an alternating minimization algorithm for the dictionary learning/sparse coding problem. The dictionary learning problem is to factorize vector samples $y^{1}, y^{2}, \ldots, y^{n}$ into an appropriate basis (dictionary) $A^{*}$ and sparse vectors $x^{1*}, \ldots, x^{n*}$. Our algorithm is a simple alternating minimization procedure that switches between $\ell_1$ minimization and gradient descent in alternate steps. Dictionary learning, and specifically alternating minimization algorithms for dictionary learning, are well studied both theoretically and empirically. However, in contrast to previous theoretical analyses for this problem, we replace a condition on the operator norm (that is, the largest-magnitude singular value) of the true underlying dictionary $A^{*}$ with a condition on the matrix infinity norm (that is, the largest-magnitude entry). Our guarantees are under a reasonable generative model that allows for dictionaries with growing operator norms, and can handle an arbitrary level of overcompleteness, while having sparsity that is information-theoretically optimal. We also establish upper bounds on the sample complexity of our algorithm.
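To make the alternation described above concrete, here is a minimal NumPy sketch of this kind of scheme: each round solves an $\ell_1$-regularized least-squares problem for the sparse codes (via ISTA, a standard proximal-gradient method) and then takes a gradient-descent step on the dictionary. This is an illustrative sketch under assumed defaults, not the authors' exact procedure; the function names, step size, regularization weight, and column normalization are assumptions introduced for the example.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code_ista(A, y, lam, n_iter=200):
    # l1 minimization step: min_x 0.5*||y - A x||^2 + lam*||x||_1, via ISTA.
    L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

def alternating_minimization(Y, K, lam=0.1, step=0.1, n_rounds=20, seed=None):
    # Y: (d, n) matrix whose columns are the samples y^1, ..., y^n.
    # K: number of dictionary atoms (columns of A); K > d gives an overcomplete dictionary.
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    A = rng.standard_normal((d, K))
    A /= np.linalg.norm(A, axis=0, keepdims=True)   # unit-norm atoms (an assumed convention)
    X = np.zeros((K, n))
    for _ in range(n_rounds):
        # Step 1: l1 minimization to update the sparse codes.
        X = np.column_stack([sparse_code_ista(A, Y[:, i], lam) for i in range(n)])
        # Step 2: gradient-descent step on the dictionary for 0.5*||Y - A X||_F^2.
        A -= step * (A @ X - Y) @ X.T / n
        A /= np.maximum(np.linalg.norm(A, axis=0, keepdims=True), 1e-12)
    return A, X
```

For instance, `alternating_minimization(Y, K=2*Y.shape[0])` would run the sketch on an overcomplete dictionary with twice as many atoms as ambient dimensions; the paper's guarantees concern when such iterates converge locally to the true $A^{*}$.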
