ResearchTrend.AI


OpenRAG: Optimizing RAG End-to-End via In-Context Retrieval Learning

11 March 2025
Jiawei Zhou
Lei Chen
Communities: 3DV · VLM
Abstract

In this paper, we analyze and empirically show that the relevance learned for conventional information retrieval (IR) scenarios may be inconsistent with what is needed in retrieval-augmented generation (RAG) scenarios. To bridge this gap, we introduce OpenRAG, a RAG framework that is optimized end-to-end by tuning the retriever to capture in-context relevance, enabling it to adapt to diverse and evolving needs. Extensive experiments across a wide range of tasks demonstrate that OpenRAG, by tuning a retriever end-to-end, yields a consistent 4.0% improvement over the original retriever and outperforms existing state-of-the-art retrievers by 2.1%. Additionally, our results indicate that for some tasks, an end-to-end tuned 0.2B retriever can achieve improvements that surpass those of RAG-oriented or instruction-tuned 8B large language models (LLMs), highlighting the cost-effectiveness of our approach in enhancing RAG systems.
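The abstract does not spell out the paper's training objective, but a common way to tune a retriever end-to-end for "in-context relevance" is to treat how much each retrieved passage actually helps the LLM produce the gold answer as the supervision signal, and align the retriever's score distribution with it (e.g. via a KL-divergence distillation loss). A minimal sketch of that general idea, with all function names and inputs hypothetical and not taken from the paper:

```python
import math

def softmax(xs, temp=1.0):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp((x - m) / temp) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def in_context_relevance_loss(retriever_scores, lm_answer_logprobs, temp=1.0):
    """KL(P_LM || P_retriever).

    retriever_scores    -- the retriever's similarity score for each candidate
                           passage (hypothetical values for illustration).
    lm_answer_logprobs  -- log-probability the LLM assigns to the gold answer
                           when each passage is placed in its context; this is
                           the "in-context relevance" signal.
    The loss is minimized (zero) when the retriever ranks passages exactly as
    the LLM's answer likelihoods do.
    """
    p = softmax(lm_answer_logprobs, temp)  # target: in-context utility
    q = softmax(retriever_scores, temp)    # current retrieval distribution
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

In a full training loop this loss would be backpropagated into the retriever's encoder while the LLM stays frozen; the sketch above only shows the objective, which is zero when the two distributions agree and positive otherwise.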

@article{zhou2025_2503.08398,
  title={OpenRAG: Optimizing RAG End-to-End via In-Context Retrieval Learning},
  author={Jiawei Zhou and Lei Chen},
  journal={arXiv preprint arXiv:2503.08398},
  year={2025}
}