
Faster Binary Embeddings for Preserving Euclidean Distances

International Conference on Learning Representations (ICLR), 2020
Abstract

We propose a fast, distance-preserving, binary embedding algorithm to transform a high-dimensional dataset $\mathcal{T}\subseteq\mathbb{R}^n$ into binary sequences in the cube $\{\pm 1\}^m$. When $\mathcal{T}$ consists of well-spread (i.e., non-sparse) vectors, our embedding method applies a stable noise-shaping quantization scheme to $Ax$, where $A\in\mathbb{R}^{m\times n}$ is a sparse Gaussian random matrix. This contrasts with most binary embedding methods, which usually use $x\mapsto \mathrm{sign}(Ax)$ for the embedding. Moreover, we show that Euclidean distances among the elements of $\mathcal{T}$ are approximated by the $\ell_1$ norm on the images of $\{\pm 1\}^m$ under a fast linear transformation. This again contrasts with standard methods, where the Hamming distance is used instead. Our method is both fast and memory efficient, with time complexity $O(m)$ and space complexity $O(m)$. Further, we prove that the method is accurate and its associated error is comparable to that of a continuous-valued Johnson-Lindenstrauss embedding plus a quantization error that admits a polynomial decay as the embedding dimension $m$ increases. Thus the length of the binary codes required to achieve a desired accuracy is quite small, and we show it can even be compressed further without compromising the accuracy. To illustrate our results, we test the proposed method on natural images and show that it achieves strong performance.
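The encoding step described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact construction: it uses a dense Gaussian matrix in place of the paper's sparse one, and a standard first-order Sigma-Delta quantizer as a representative stable noise-shaping scheme; the paper's quantizer and scaling may differ.

```python
import numpy as np

def sigma_delta_quantize(y):
    """First-order Sigma-Delta quantization of a real sequence to {+1, -1}.

    A standard stable noise-shaping recursion:
        q_i = sign(y_i + u_{i-1}),   u_i = u_{i-1} + y_i - q_i.
    Used here as a stand-in for the paper's noise-shaping scheme.
    """
    q = np.empty_like(y)
    u = 0.0
    for i, yi in enumerate(y):
        q[i] = 1.0 if yi + u >= 0 else -1.0
        u = u + yi - q[i]
    return q

rng = np.random.default_rng(0)
n, m = 128, 512  # ambient dimension and embedding dimension (illustrative)

# Dense Gaussian measurement matrix for simplicity; the paper instead uses
# a *sparse* Gaussian random matrix to obtain its fast runtime.
A = rng.standard_normal((m, n)) / np.sqrt(m)

x = rng.standard_normal(n)      # a well-spread (non-sparse) input vector
code = sigma_delta_quantize(A @ x)  # binary code in {+-1}^m
```

Note that, unlike sign-based embeddings compared via Hamming distance, the decoding side here would estimate Euclidean distances from the $\ell_1$ norm of the codes' images under a fast linear (condensation-type) transformation, as the abstract describes; that transformation is part of the paper's construction and is not reproduced in this sketch.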
