There and Back Again: A General Approach to Learning Sparse Models

25 June 2017
Vatsal Sharan, Kai Sheng Tai, Peter Bailis, Gregory Valiant
arXiv:1706.08146 (abs · PDF · HTML)
Abstract

We propose a simple and efficient approach to learning sparse models. Our approach consists of (1) projecting the data into a lower-dimensional space, (2) learning a dense model in the lower-dimensional space, and then (3) recovering the sparse model in the original space via compressive sensing. We apply this approach to Non-negative Matrix Factorization (NMF), tensor decomposition, and linear classification, showing that it obtains 10× compression with negligible loss in accuracy on real data, and obtains up to 5× speedups. Our main theoretical contribution is to show the following result for NMF: if the original factors are sparse, then their projections are the sparsest solutions to the projected NMF problem. This explains why our method works for NMF and shows an interesting new property of random projections: they can preserve the solutions of non-convex optimization problems such as NMF.
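As a concrete illustration of the three-step pipeline, here is a minimal sketch applying it to the linear-classification setting on synthetic data: a dense logistic-regression model is learned in a randomly projected space, and a sparse weight vector in the original space is then recovered by orthogonal matching pursuit. The problem sizes, the Gaussian projection, and the choice of OMP as the compressive-sensing decoder are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, orthogonal_mp

rng = np.random.default_rng(0)

# Problem sizes (illustrative): n samples, ambient dimension d,
# projected dimension k << d, and sparsity s of the true model.
n, d, k, s = 2000, 1000, 200, 10

# Ground truth: an s-sparse weight vector in the original space.
w_true = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
w_true[support] = rng.normal(size=s)

X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(int)

# Step 1: project the data into a lower-dimensional space.
Phi = rng.normal(size=(k, d)) / np.sqrt(k)   # Gaussian random projection
X_proj = X @ Phi.T                            # n x k

# Step 2: learn a dense model in the projected space. Its weight
# vector approximates Phi @ w_true (up to scale).
clf = LogisticRegression(max_iter=1000).fit(X_proj, y)
w_proj = clf.coef_.ravel()

# Step 3: recover a sparse model in the original space via
# compressive sensing, here by solving w_proj ~= Phi @ w with OMP.
w_hat = orthogonal_mp(Phi, w_proj, n_nonzero_coefs=s)

overlap = len(set(np.flatnonzero(w_hat)) & set(support))
print(f"recovered {overlap}/{s} of the true support")
```

For the NMF case, the recovery step proceeds factor by factor: each sparse factor is reconstructed from its projection, which the paper's theoretical result identifies as the sparsest solution to the projected problem.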
