
OSCAR: Online Soft Compression And Reranking

Abstract

Retrieval-Augmented Generation (RAG) enhances Large Language Models (LLMs) by integrating external knowledge, leading to improved accuracy and relevance. However, scaling RAG pipelines remains computationally expensive as retrieval sizes grow. To address this, we introduce OSCAR, a novel query-dependent online soft compression method that reduces computational overhead while preserving performance. Unlike traditional hard compression methods, which shorten retrieved texts, or soft compression approaches, which map documents to continuous embeddings offline, OSCAR dynamically compresses retrieved information at inference time, eliminating storage overhead and enabling higher compression rates. Additionally, we extend OSCAR to simultaneously perform reranking, further optimizing the efficiency of the RAG pipeline. Our experiments demonstrate state-of-the-art performance with a 2-5x speed-up in inference and minimal to no loss in accuracy for LLMs ranging from 1B to 24B parameters. The models are available at: this https URL.
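
To make the idea concrete, the sketch below shows one plausible shape for query-dependent online soft compression with a shared reranking head, in PyTorch. This is a minimal illustration under our own assumptions, not the paper's implementation: the class and names (OnlineSoftCompressor, num_memory_tokens, rerank_head) are hypothetical, and the released models may be structured differently.

# Minimal sketch of query-dependent online soft compression, assuming a small
# transformer "compressor" that maps (query, retrieved document) token
# embeddings to a fixed number of continuous embeddings for the generator LLM.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class OnlineSoftCompressor(nn.Module):
    def __init__(self, hidden_dim=1024, num_memory_tokens=8):
        super().__init__()
        # Learnable "memory" tokens that absorb the document content.
        self.memory = nn.Parameter(torch.randn(num_memory_tokens, hidden_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Scalar relevance head: the same forward pass also yields a
        # reranking score, so compression and reranking share compute.
        self.rerank_head = nn.Linear(hidden_dim, 1)

    def forward(self, query_emb, doc_emb):
        # query_emb: (B, Lq, D) and doc_emb: (B, Ld, D) token embeddings.
        batch = query_emb.size(0)
        mem = self.memory.unsqueeze(0).expand(batch, -1, -1)
        # Concatenating memory, query, and document tokens lets the memory
        # attend to the query, making the compression query-dependent.
        x = torch.cat([mem, query_emb, doc_emb], dim=1)
        h = self.encoder(x)
        n = self.memory.size(0)
        compressed = h[:, :n, :]  # (B, n, D): soft stand-in for the document
        score = self.rerank_head(compressed.mean(dim=1)).squeeze(-1)  # (B,)
        return compressed, score

In a full pipeline under this sketch, each retrieved document would be compressed in parallel at query time, the reranking scores used to keep only the top documents, and the surviving memory tokens prepended to the generator's input embeddings in place of the raw retrieved text.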

@article{louis2025_2504.07109,
  title={OSCAR: Online Soft Compression And Reranking},
  author={Maxime Louis and Thibault Formal and Hervé Dejean and Stéphane Clinchant},
  journal={arXiv preprint arXiv:2504.07109},
  year={2025}
}