Orthonormal Product Quantization Network for Scalable Face Image Retrieval
Current deep quantization methods that produce binary code representations for efficient image retrieval mostly learn their codewords from data. They rarely investigate how the inherent distribution of the codewords affects quantization, and the learning metrics presently used are insufficient for the face image retrieval task. To address these issues, this paper integrates product quantization into an end-to-end deep learning framework for face image retrieval. We propose a novel scheme that uses predefined orthonormal vectors as codewords to enhance the informativeness of the quantization and reduce redundancy among the codewords. A tailored loss function maximizes discriminability among identities in each quantization subspace for both the quantized and the original features, and an entropy-based regularization term is imposed to reduce the quantization error. Experiments were conducted on three commonly used datasets under both single- and cross-domain retrieval settings. The proposed method outperformed all the deep hashing/quantization methods it was compared with in both settings, and we observe that the proposed orthonormal codewords consistently improved the standard retrieval performance and generalization ability of both models. These results indicate that the proposed method is better suited to scalable face image retrieval than deep hashing methods.
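To make the core idea concrete, the following is a minimal sketch, assuming PyTorch, of product quantization with fixed orthonormal codewords and a differentiable soft assignment. The class name `OrthonormalPQ`, the specific form of the entropy regularizer, and all hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class OrthonormalPQ(nn.Module):
    """Product quantization with predefined (fixed) orthonormal codewords."""

    def __init__(self, feat_dim=512, num_subspaces=4, codebook_size=64, temperature=1.0):
        super().__init__()
        assert feat_dim % num_subspaces == 0
        self.M = num_subspaces
        self.D = feat_dim // num_subspaces   # dimension of each subspace
        self.K = codebook_size
        assert self.K <= self.D, "at most D mutually orthonormal codewords fit in a D-dim subspace"
        self.temperature = temperature
        # Predefined orthonormal codewords: the first K columns of a random
        # orthogonal matrix per subspace, registered as a buffer (not learned).
        q, _ = torch.linalg.qr(torch.randn(self.M, self.D, self.D))
        self.register_buffer("codebooks", q[:, :, : self.K].transpose(1, 2))  # (M, K, D)

    def forward(self, x):
        # x: (batch, feat_dim) real-valued deep features from the backbone.
        b = x.size(0)
        sub = x.view(b, self.M, self.D)                       # split into M subvectors
        sim = torch.einsum("bmd,mkd->bmk", sub, self.codebooks)
        # Soft assignment keeps quantization differentiable during training.
        assign = F.softmax(sim / self.temperature, dim=-1)    # (b, M, K)
        quantized = torch.einsum("bmk,mkd->bmd", assign, self.codebooks)
        hard_codes = sim.argmax(dim=-1)                       # (b, M) integer PQ codes
        return quantized.reshape(b, -1), hard_codes, assign


def entropy_regularizer(assign):
    # One plausible form of the entropy-based term mentioned in the abstract:
    # penalizing the entropy of each soft assignment pushes it toward one-hot,
    # shrinking the gap between soft (training) and hard (retrieval) codes.
    return -(assign * assign.clamp_min(1e-8).log()).sum(dim=-1).mean()


if __name__ == "__main__":
    pq = OrthonormalPQ()
    feats = torch.randn(8, 512)                   # stand-in for face embeddings
    quantized, codes, assign = pq(feats)
    print(quantized.shape, codes.shape, entropy_regularizer(assign).item())
```

Registering the codebooks as a buffer rather than a parameter reflects the abstract's point that the codewords are predefined rather than learned from data; the temperature, number of subspaces, and codebook size would in practice be tuned per dataset.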