Alleviating LLM-based Generative Retrieval Hallucination in Alipay Search

27 March 2025
Yedan Shen
Kaixin Wu
Yuechen Ding
Jingyuan Wen
Hong Liu
Mingjie Zhong
Zhouhan Lin
Jia Xu
Linjian Mo
    RALM
Abstract

Generative retrieval (GR) has revolutionized document retrieval with the advent of large language models (LLMs), and LLM-based GR is gradually being adopted by the industry. Despite its remarkable advantages and potential, LLM-based GR suffers from hallucination and in some instances generates documents that are irrelevant to the query, severely challenging its credibility in practical applications. We therefore propose an optimized GR framework designed to alleviate retrieval hallucination, which integrates knowledge distillation reasoning into model training and incorporates a decision agent to further improve retrieval precision. Specifically, we employ LLMs to assess and reason about GR-retrieved query-document (q-d) pairs, and then distill the reasoning data into the GR model as transferred knowledge. Moreover, we use a decision agent as a post-processing step that extends the GR-retrieved documents with candidates from a retrieval model and selects the most relevant ones from multiple perspectives as the final generative retrieval result. Extensive offline experiments on real-world datasets and online A/B tests on Fund Search and Insurance Search in Alipay demonstrate our framework's superiority and effectiveness in improving search quality and conversion gains.
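The two stages described in the abstract can be pictured with the minimal Python sketch below. It is an illustration based only on the abstract: every object (gr_model, teacher_llm, dense_retriever, reranker) and every method name is a hypothetical stand-in, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class Candidate:
    doc_id: str
    text: str
    source: str  # "generative" or "dense" (assumed labels for illustration)


def build_distillation_data(queries, gr_model, teacher_llm):
    """Stage 1 (training): have a teacher LLM judge GR-retrieved query-document
    pairs and explain its judgement; the reasoning traces become distillation
    targets for the GR model. All interfaces here are assumptions."""
    distill_set = []
    for query in queries:
        for doc in gr_model.generate(query):
            judgement = teacher_llm.assess(query=query, document=doc.text)
            distill_set.append({
                "query": query,
                "document": doc.text,
                "label": judgement["relevant"],
                "reasoning": judgement["rationale"],
            })
    return distill_set


def decision_agent(query, gr_model, dense_retriever, reranker, top_k=5):
    """Stage 2 (inference): extend the GR-retrieved documents with candidates
    from a conventional retrieval model, then select the most relevant ones
    as the final generative retrieval result."""
    candidates = [Candidate(d.doc_id, d.text, "generative") for d in gr_model.generate(query)]
    candidates += [Candidate(d.doc_id, d.text, "dense") for d in dense_retriever.search(query)]
    # Score each candidate (e.g. by semantic relevance), deduplicate by doc_id,
    # and keep the top-k as the final result.
    seen, ranked = set(), []
    for cand in sorted(candidates, key=lambda c: reranker.score(query, c.text), reverse=True):
        if cand.doc_id not in seen:
            seen.add(cand.doc_id)
            ranked.append(cand)
    return ranked[:top_k]

In this reading of the abstract, the distilled reasoning data is consumed only during GR model training, while the decision agent runs at inference time as post-processing over the generated document identifiers.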

View on arXiv
@article{shen2025_2503.21098,
  title={Alleviating LLM-based Generative Retrieval Hallucination in Alipay Search},
  author={Yedan Shen and Kaixin Wu and Yuechen Ding and Jingyuan Wen and Hong Liu and Mingjie Zhong and Zhouhan Lin and Jia Xu and Linjian Mo},
  journal={arXiv preprint arXiv:2503.21098},
  year={2025}
}