
Optimization of embeddings storage for RAG systems using quantization and dimensionality reduction techniques

Abstract

Retrieval-Augmented Generation (RAG) enhances language models by retrieving relevant information from external knowledge bases, relying on high-dimensional vector embeddings typically stored in float32 precision. Storing these embeddings at scale, however, presents significant memory challenges. To address this issue, we systematically investigate, on the MTEB benchmark, two complementary optimization strategies: quantization, evaluating standard formats (float16, int8, binary) and low-bit floating-point types (float8), and dimensionality reduction, assessing methods such as PCA, Kernel PCA, UMAP, Random Projections, and Autoencoders. Our results show that float8 quantization achieves a 4x storage reduction with minimal performance degradation (<0.3%), significantly outperforming int8 quantization at the same compression level while being simpler to implement. PCA emerges as the most effective dimensionality reduction technique. Crucially, combining moderate PCA (e.g., retaining 50% of the dimensions) with float8 quantization offers an excellent trade-off, achieving 8x total compression with less performance impact than int8 alone (which provides only 4x compression). To facilitate practical application, we propose a methodology based on visualizing the performance-storage trade-off space, allowing practitioners to identify the configuration that maximizes performance within their specific memory constraints.
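The combined pipeline the abstract describes can be sketched in a few lines. The snippet below is an illustrative assumption, not the authors' code: it applies PCA retaining 50% of the dimensions, then casts to an 8-bit float, assuming the ml_dtypes package for a numpy-compatible float8 (e4m3) dtype; the corpus, sizes, and variable names are made up for demonstration.

```python
# Sketch of the PCA + float8 pipeline from the abstract (assumptions noted above).
# Requires: numpy, scikit-learn, ml_dtypes.
import numpy as np
from sklearn.decomposition import PCA
import ml_dtypes

rng = np.random.default_rng(0)
# Stand-in for a corpus of 10,000 embeddings with 768 dimensions in float32.
embeddings = rng.standard_normal((10_000, 768)).astype(np.float32)

# Dimensionality reduction: keep 50% of the original dimensions (2x compression).
pca = PCA(n_components=embeddings.shape[1] // 2)
reduced = pca.fit_transform(embeddings).astype(np.float32)

# Quantization: cast float32 -> float8 (4x compression on top of PCA).
quantized = reduced.astype(ml_dtypes.float8_e4m3fn)

# Total compression: 2x (PCA) * 4x (float8) = 8x.
print(embeddings.nbytes / quantized.nbytes)  # -> 8.0

# At query time, cast stored vectors back to float32 before similarity search.
restored = quantized.astype(np.float32)
```

In this sketch, float8 needs no calibration step, which reflects the abstract's point that it is simpler to deploy than int8 quantization, where a scale (and possibly zero-point) must be fitted to the embedding distribution.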

@article{huerga-pérez2025_2505.00105,
  title={Optimization of embeddings storage for RAG systems using quantization and dimensionality reduction techniques},
  author={Naamán Huerga-Pérez and Rubén Álvarez and Rubén Ferrero-Guillén and Alberto Martínez-Gutiérrez and Javier Díez-González},
  journal={arXiv preprint arXiv:2505.00105},
  year={2025}
}