ResearchTrend.AI

Compressed Dictionary Learning

2 May 2018
Karin Schnass
Flávio C. A. Teixeira
Abstract

In this paper we show that the computational complexity of the Iterative Thresholding and K-residual-Means (ITKrM) algorithm for dictionary learning can be significantly reduced by using dimensionality-reduction techniques based on the Johnson-Lindenstrauss lemma. The dimensionality reduction is efficiently carried out with the fast Fourier transform. We introduce the Iterative compressed-Thresholding and K-Means (IcTKM) algorithm for fast dictionary learning and study its convergence properties. We show that IcTKM can locally recover an incoherent, overcomplete generating dictionary of K atoms from training signals of sparsity level S with high probability. Fast dictionary learning is achieved by embedding the training data and the dictionary into m < d dimensions, and recovery is shown to be locally stable with an embedding dimension which scales as low as m = O(S log^4 S log^3 K). The compression effectively shatters the data dimension bottleneck in the computational cost of ITKrM, reducing it by a factor O(m/d). Our theoretical results are complemented with numerical simulations which demonstrate that IcTKM is a powerful, low-cost algorithm for learning dictionaries from high-dimensional data sets.
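The FFT-based embedding the abstract refers to can be illustrated with a minimal sketch of a fast Johnson-Lindenstrauss transform: flip the sign of each coordinate at random, mix with an FFT, and keep a random subset of m rows. This is a hypothetical illustration of the general technique, not the authors' exact construction; the function name `fast_jl_embed` and all parameter choices are assumptions.

```python
import numpy as np

def fast_jl_embed(Y, m, rng):
    """Embed the columns of Y (d x N) into m < d dimensions.

    Sketch of a fast JL transform: random diagonal sign matrix,
    unnormalized FFT to mix coordinates in O(d log d) per signal,
    then uniform row subsampling. Scaled so that squared column
    norms are preserved in expectation.
    """
    d, _ = Y.shape
    signs = rng.choice([-1.0, 1.0], size=d)       # random diagonal D
    rows = rng.choice(d, size=m, replace=False)   # random row subset
    # FFT of the sign-flipped signals; keep m rows and rescale by 1/sqrt(m).
    return np.fft.fft(signs[:, None] * Y, axis=0)[rows] / np.sqrt(m)

rng = np.random.default_rng(0)
Y = rng.standard_normal((1024, 50))  # 50 training signals in d = 1024
Z = fast_jl_embed(Y, m=128, rng=rng)
print(Z.shape)  # (128, 50)
```

Each signal is compressed from d = 1024 to m = 128 coordinates, so a per-iteration cost that scales with the data dimension shrinks by roughly the factor m/d the abstract mentions; the embedding itself costs only O(d log d) per signal thanks to the FFT. Note that this FFT variant produces complex-valued embeddings.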
