Understanding Sparse JL for Feature Hashing

Abstract

Feature hashing and other random projection schemes are commonly used to reduce the dimensionality of feature vectors. The goal is to efficiently project a high-dimensional feature vector living in $\mathbb{R}^n$ into a much lower-dimensional space $\mathbb{R}^m$, while approximately preserving the Euclidean norm. These schemes can be constructed using sparse random projections, for example using a sparse Johnson-Lindenstrauss (JL) transform. A line of work introduced by Weinberger et al. (ICML '09) analyzes the accuracy of sparse JL with sparsity 1 on feature vectors with small $\ell_\infty$-to-$\ell_2$ norm ratio. Recently, Freksen, Kamma, and Larsen (NeurIPS '18) closed this line of work by proving a tight tradeoff between the $\ell_\infty$-to-$\ell_2$ norm ratio and accuracy for sparse JL with sparsity 1. In this paper, we demonstrate the benefits of using sparsity $s$ greater than 1 in sparse JL on feature vectors. Our main result is a tight tradeoff between the $\ell_\infty$-to-$\ell_2$ norm ratio and accuracy for general sparsity $s$, which significantly generalizes the result of Freksen et al. Our result theoretically demonstrates that sparse JL with $s > 1$ can have significantly better norm-preservation properties on feature vectors than sparse JL with $s = 1$; we also empirically demonstrate this finding.
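
As a concrete illustration of the construction discussed in the abstract, the following is a minimal sketch, not the authors' implementation, of a sparse JL projection with column sparsity $s$: each coordinate of the input is hashed to $s$ distinct rows of the output with independent random signs and scaled by $1/\sqrt{s}$. The function name `sparse_jl_transform` and its parameters are illustrative assumptions; setting $s = 1$ recovers feature hashing in the style of Weinberger et al.

```python
import numpy as np

def sparse_jl_transform(x, m, s, rng=None):
    """Project x in R^n down to R^m with an (implicit) sparse JL matrix.

    Each column of the projection matrix has exactly s nonzero entries,
    each equal to +/- 1/sqrt(s), placed at s distinct uniformly random rows.
    Illustrative sketch; not the reference implementation from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = x.shape[0]
    y = np.zeros(m)
    for i in range(n):
        if x[i] == 0.0:
            continue  # skip zero coordinates; feature vectors are often sparse
        rows = rng.choice(m, size=s, replace=False)   # s distinct target rows
        signs = rng.choice([-1.0, 1.0], size=s)       # independent random signs
        y[rows] += signs * x[i] / np.sqrt(s)          # scaling gives E[||y||^2] = ||x||^2
    return y

# Usage: the Euclidean norm is preserved up to small distortion with high probability.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
y = sparse_jl_transform(x, m=512, s=4, rng=rng)
print(np.linalg.norm(x), np.linalg.norm(y))  # the two norms should be close
```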
