Dual Codebook VQ: Enhanced Image Reconstruction with Reduced Codebook Size

13 March 2025
Parisa Boodaghi Malidarreh
Jillur Rahman Saurav
Thuong Le Hoai Pham
Amir Hajighasemi
Anahita Samadi
Saurabh Shrinivas Maydeo
Mohammad Sadegh Nasr
Jacob M. Luber
Abstract

Vector Quantization (VQ) techniques face significant challenges in codebook utilization, limiting reconstruction fidelity in image modeling. We introduce a Dual Codebook mechanism that effectively addresses this limitation by partitioning the representation into complementary global and local components. The global codebook employs a lightweight transformer for concurrent updates of all code vectors, while the local codebook maintains precise feature representation through deterministic selection. This complementary approach is trained from scratch without requiring pre-trained knowledge. Experimental evaluation across multiple standard benchmark datasets demonstrates state-of-the-art reconstruction quality while using a compact codebook of size 512, half the size of previous methods that require pre-training. Our approach achieves significant FID improvements across diverse image domains, particularly excelling in scene and face reconstruction tasks. These results establish Dual Codebook VQ as an efficient paradigm for high-fidelity image reconstruction with significantly reduced computational requirements.
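
The abstract describes the mechanism only at a high level. Below is a minimal PyTorch sketch of how such a dual-codebook quantizer could be wired up, assuming a channel-wise split of the encoder features, a single transformer encoder layer that jointly refines all global code vectors, deterministic nearest-neighbour lookup, and a standard VQ-VAE commitment loss with a straight-through estimator. The class name DualCodebookVQ, all hyperparameters, and the way the 512 codebook entries are divided between the two branches are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualCodebookVQ(nn.Module):
    """Hypothetical sketch of a dual-codebook vector quantizer.

    The encoder feature map is split channel-wise into a global half and a
    local half. The global codebook is first refined by a lightweight
    transformer, so every code vector is updated jointly, while the local
    codebook uses plain deterministic nearest-neighbour selection, as in
    standard VQ. Both branches use a straight-through estimator.
    """

    def __init__(self, codebook_size=256, dim=256, n_heads=4):
        # codebook_size is per branch; 2 x 256 = 512 total is an assumption
        # about how the paper's compact codebook is split.
        super().__init__()
        assert dim % 2 == 0
        self.half = dim // 2
        self.global_codebook = nn.Embedding(codebook_size, self.half)
        self.local_codebook = nn.Embedding(codebook_size, self.half)
        # Lightweight transformer that jointly updates all global code vectors.
        layer = nn.TransformerEncoderLayer(
            d_model=self.half, nhead=n_heads,
            dim_feedforward=2 * self.half, batch_first=True)
        self.code_transformer = nn.TransformerEncoder(layer, num_layers=1)

    @staticmethod
    def _nearest(z, codebook):
        # z: (N, C); codebook: (K, C). Deterministic nearest-neighbour lookup.
        d = torch.cdist(z, codebook)   # (N, K) pairwise distances
        idx = d.argmin(dim=1)          # hard assignment
        return codebook[idx], idx

    def forward(self, z):
        # z: (B, C, H, W) encoder output.
        B, C, H, W = z.shape
        z_flat = z.permute(0, 2, 3, 1).reshape(-1, C)
        z_g, z_l = z_flat.split(self.half, dim=1)

        # Global branch: refine the whole codebook with the transformer,
        # then quantize against the refined code vectors.
        refined = self.code_transformer(
            self.global_codebook.weight.unsqueeze(0))[0]
        q_g, _ = self._nearest(z_g, refined)

        # Local branch: standard deterministic VQ lookup.
        q_l, _ = self._nearest(z_l, self.local_codebook.weight)

        q = torch.cat([q_g, q_l], dim=1)
        # Standard VQ-VAE codebook and commitment losses.
        loss = (F.mse_loss(q, z_flat.detach())
                + 0.25 * F.mse_loss(z_flat, q.detach()))
        # Straight-through estimator so gradients reach the encoder.
        q = z_flat + (q - z_flat).detach()
        return q.view(B, H, W, C).permute(0, 3, 1, 2), loss

In this sketch the transformer's self-attention mixes all global code vectors before lookup, so gradients from the selected refined codes propagate back to every raw codebook entry on each step, rather than only to the entries that were selected, which is one plausible reading of "concurrent updates of all code vectors" in the abstract.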

@article{malidarreh2025_2503.10832,
  title={Dual Codebook VQ: Enhanced Image Reconstruction with Reduced Codebook Size},
  author={Parisa Boodaghi Malidarreh and Jillur Rahman Saurav and Thuong Le Hoai Pham and Amir Hajighasemi and Anahita Samadi and Saurabh Shrinivas Maydeo and Mohammad Sadegh Nasr and Jacob M. Luber},
  journal={arXiv preprint arXiv:2503.10832},
  year={2025}
}