Learning Product Codebooks using Vector Quantized Autoencoders for Image Retrieval

Abstract

Vector-Quantized Variational Autoencoders (VQ-VAE) provide an unsupervised model for learning discrete representations by combining vector quantization and autoencoders. In this paper, we incorporate product quantization into the bottleneck stage of the VQ-VAE and propose an end-to-end unsupervised learning model for image retrieval tasks. The product quantizer has the advantage of generating large codebooks. Fast retrieval can be achieved with lookup tables that store the distances between all pairs of sub-codewords. We also propose that an embedded bottleneck quantizer can serve as a regularizer that forces the output of the encoder into a constrained coding space. This is critical for applications such as image retrieval, which require the learned latent features to preserve the similarity relations of the data space. Furthermore, we describe the VQ-VAE within an information-theoretic framework and show that the loss function of the original VQ-VAE can be derived from the so-called variational deterministic information bottleneck (VDIB) principle.
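The lookup-table retrieval step mentioned above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the sub-codebooks are random here rather than learned in an autoencoder bottleneck, and all sizes and names are illustrative. It shows the core idea that, once per-subspace tables of distances between every two sub-codewords are precomputed, comparing two encoded vectors reduces to a handful of table lookups.

```python
import numpy as np

rng = np.random.default_rng(0)

M, K, d = 4, 16, 32     # M sub-quantizers, K sub-codewords each, total dim d
ds = d // M             # dimension of each subspace

# One (K, ds) sub-codebook per subspace (random for illustration; in the
# paper these would be learned end-to-end inside the bottleneck).
codebooks = rng.normal(size=(M, K, ds))

# Precomputed (M, K, K) tables: squared distance between every pair of
# sub-codewords within each subspace.
tables = ((codebooks[:, :, None, :] - codebooks[:, None, :, :]) ** 2).sum(-1)

def encode(x):
    """Quantize x to M sub-codeword indices, one per subspace."""
    return np.array([
        np.argmin(((codebooks[m] - x[m * ds:(m + 1) * ds]) ** 2).sum(axis=1))
        for m in range(M)
    ])

def pq_distance(codes_a, codes_b):
    """Approximate squared distance between two encoded vectors:
    M table lookups summed, with no vector arithmetic at query time."""
    return tables[np.arange(M), codes_a, codes_b].sum()

# Rank a small encoded database against an encoded query.
db_codes = np.stack([encode(x) for x in rng.normal(size=(100, d))])
q_codes = encode(rng.normal(size=d))
ranking = np.argsort([pq_distance(q_codes, c) for c in db_codes])
```

Note that the effective codebook size is K**M (here 16**4 = 65536 codewords) while only M*K sub-codewords are stored, which is the "large codebook" advantage the abstract refers to.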
