
Multi-level projection with exponential parallel speedup; Application to sparse auto-encoders neural networks

Abstract

The $\ell_{1,\infty}$ norm is an efficient structured projection, but the complexity of the best algorithm is unfortunately $\mathcal{O}\big(nm\log(nm)\big)$ for a matrix in $\mathbb{R}^{n\times m}$. In this paper, we propose a new bi-level projection method for which we show that the time complexity for the $\ell_{1,\infty}$ norm is only $\mathcal{O}\big(nm\big)$ for a matrix in $\mathbb{R}^{n\times m}$, and $\mathcal{O}\big(n+m\big)$ with full parallel power. We generalize our method to tensors and propose a new multi-level projection, whose induced decomposition yields a linear parallel speedup up to an exponential speedup factor, resulting in a time complexity lower-bounded by the sum of the dimensions instead of their product. We provide a large base of implementations of our framework for the bi-level and tri-level cases (matrices and tensors) and for various norms, and also provide parallel implementations. Experiments show that our projection is 2 times faster than the current fastest Euclidean algorithms while providing the same accuracy and better sparsity in neural network applications.
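To make the bi-level idea concrete, the sketch below follows a common two-stage reading of an $\ell_{1,\infty}$-ball projection: first take the $\ell_\infty$ norm of each row, project that vector of norms onto the $\ell_1$ ball of radius $C$, then clip each row to its resulting per-row radius. This is a minimal NumPy illustration, not the paper's actual implementation; the function names are ours, and the sort-based $\ell_1$ step shown here costs $\mathcal{O}(n\log n)$ rather than the linear time claimed in the paper.

```python
import numpy as np

def project_l1_ball(v, z):
    """Project a nonnegative vector v onto the l1 ball of radius z
    (classic sort-and-threshold scheme)."""
    if v.sum() <= z:
        return v.copy()
    u = np.sort(v)[::-1]                      # sorted in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - z) / np.arange(1, len(u) + 1) > 0)[0][-1]
    tau = (css[rho] - z) / (rho + 1)          # soft-threshold level
    return np.maximum(v - tau, 0.0)

def bilevel_l1inf_projection(X, C):
    """Two-stage (bi-level) projection of X onto the l_{1,inf} ball of radius C.
    Cost is O(nm) for the norms/clipping plus the l1 step on n values."""
    r = np.abs(X).max(axis=1)                 # l_inf norm of each row, O(nm)
    t = project_l1_ball(r, C)                 # per-row radii summing to <= C
    return np.clip(X, -t[:, None], t[:, None])  # clip each row, O(nm)
```

The row-wise $\ell_\infty$ norms and the final clipping are embarrassingly parallel across rows, which is where a parallel implementation would recover the sum-of-dimensions behavior described in the abstract.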
