W-RAG: Weakly Supervised Dense Retrieval in RAG for Open-domain Question Answering

15 August 2024
Jinming Nian
Zhiyuan Peng
Qifan Wang
Yi Fang
    RALM
Abstract

In knowledge-intensive tasks such as open-domain question answering (OpenQA), large language models (LLMs) often struggle to generate factual answers when relying solely on their internal (parametric) knowledge. To address this limitation, Retrieval-Augmented Generation (RAG) systems enhance LLMs by retrieving relevant information from external sources, thereby positioning the retriever as a pivotal component. Although dense retrieval demonstrates state-of-the-art performance, its training poses challenges due to the scarcity of ground-truth evidence, largely attributed to the high costs of human annotation. In this paper, we propose W-RAG, a method that draws weak training signals from an LLM's downstream task (such as OpenQA) and fine-tunes the retriever to prioritize passages that most benefit the task. Specifically, we rerank the top-k passages retrieved via BM25 by assessing the probability that the LLM will generate the correct answer for a question given each passage. The highest-ranking passages are then used as positive fine-tuning examples for dense retrieval. We conduct comprehensive experiments across four publicly available OpenQA datasets to demonstrate that our approach enhances both retrieval and OpenQA performance compared to baseline models, achieving results comparable to models fine-tuned with human-labeled data.
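
The weak-labelling step described above, ranking BM25 candidates by how likely the LLM is to produce the gold answer given each passage, can be sketched as follows. This is a minimal illustration, assuming a Hugging Face causal LM ("gpt2" as a stand-in), a simple prompt template, and hypothetical helper names (answer_logprob, weak_positives); it is not the authors' exact implementation.

# Sketch of the weak-labelling idea from the abstract: score each
# BM25-retrieved passage by the LLM's likelihood of producing the
# ground-truth answer, then keep the best-scoring passages as positives
# for dense-retriever fine-tuning. Model name, prompt template, and
# helper names are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any causal LM with a log-likelihood works here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def answer_logprob(question: str, passage: str, answer: str) -> float:
    """Average log-probability of the answer tokens, conditioned on a
    prompt that contains the candidate passage and the question."""
    prompt = f"Context: {passage}\nQuestion: {question}\nAnswer:"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    answer_ids = tokenizer(" " + answer, return_tensors="pt",
                           add_special_tokens=False).input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Position t predicts token t+1, so drop the last position before softmax.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    answer_positions = range(prompt_ids.shape[1] - 1, input_ids.shape[1] - 1)
    token_lps = [log_probs[0, pos, input_ids[0, pos + 1]] for pos in answer_positions]
    return torch.stack(token_lps).mean().item()

def weak_positives(question: str, answer: str, bm25_passages: list[str],
                   top_n: int = 1) -> list[str]:
    """Rerank BM25 candidates by answer likelihood and return the best
    ones as weakly supervised positives for the dense retriever."""
    ranked = sorted(bm25_passages,
                    key=lambda p: answer_logprob(question, p, answer),
                    reverse=True)
    return ranked[:top_n]

The passages returned by weak_positives would then be paired with the question as positive examples for fine-tuning the dense retriever, in place of human-annotated ground-truth evidence.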

@article{nian2025_2408.08444,
  title={W-RAG: Weakly Supervised Dense Retrieval in RAG for Open-domain Question Answering},
  author={Jinming Nian and Zhiyuan Peng and Qifan Wang and Yi Fang},
  journal={arXiv preprint arXiv:2408.08444},
  year={2025}
}