Cite Before You Speak: Enhancing Context-Response Grounding in E-commerce Conversational LLM-Agents

5 March 2025
Jingying Zeng
Hui Liu
Zhenwei Dai
X. Tang
Chen Luo
Samarth Varshney
Zhen Li
Qi He
Abstract

With the advancement of conversational large language models (LLMs), several LLM-based Conversational Shopping Agents (CSAs) have been developed to help customers shop online more smoothly. The primary objective in building an engaging and trustworthy CSA is to ensure that the agent's responses about product factoids are accurate and factually grounded. However, two challenges remain. First, LLMs produce hallucinated or unsupported claims; such inaccuracies risk spreading misinformation and diminishing customer trust. Second, without knowledge-source attribution in CSA responses, customers struggle to verify LLM-generated information. To address both challenges, we present an easily productionized solution that enables a "citation experience" for our customers. We build auto-evaluation metrics to holistically evaluate the LLM's grounding and attribution capabilities, showing that the citation-generation paradigm substantially improves grounding performance by 13.83%. To deploy this capability at scale, we introduce the Multi-UX-Inference system, which appends source citations to LLM outputs while preserving existing user-experience features and supporting scalable inference. Large-scale online A/B tests show that grounded CSA responses improve customer engagement by 3%-10%, depending on UX variations.
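The grounding/attribution auto-evaluation described above can be illustrated with a toy metric. The sketch below is not the paper's actual evaluation; the inline tag format `[S1]` and the function name `citation_coverage` are assumptions made for illustration. It computes the fraction of response sentences whose citations all resolve to known source IDs:

```python
import re


def citation_coverage(response: str, source_ids: set) -> float:
    """Fraction of sentences that cite only known sources.

    Sentences are expected to carry inline tags like [S1].
    Illustrative metric only, not the paper's auto-eval.
    """
    # Naive sentence split on terminal punctuation.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", response) if s.strip()]
    if not sentences:
        return 0.0
    cited = 0
    for sent in sentences:
        tags = re.findall(r"\[(S\d+)\]", sent)
        # Count a sentence as grounded only if it has at least one
        # citation and every cited ID exists in the source set.
        if tags and all(t in source_ids for t in tags):
            cited += 1
    return cited / len(sentences)


resp = ("This blender has a 1200W motor [S1]. "
        "It ships in two days [S2]. "
        "Many users love it.")
print(citation_coverage(resp, {"S1", "S2"}))  # 2 of 3 sentences are cited
```

A production variant would additionally check that each cited snippet actually entails the sentence's claim, which is the harder grounding problem the paper's metrics target.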

View on arXiv
@article{zeng2025_2503.04830,
  title={Cite Before You Speak: Enhancing Context-Response Grounding in E-commerce Conversational LLM-Agents},
  author={Jingying Zeng and Hui Liu and Zhenwei Dai and Xianfeng Tang and Chen Luo and Samarth Varshney and Zhen Li and Qi He},
  journal={arXiv preprint arXiv:2503.04830},
  year={2025}
}