From Documents to Dialogue: Building KG-RAG Enhanced AI Assistants

24 February 2025
Manisha Mukherjee, Sungchul Kim, Xiang Chen, Dan Luo, Tong Yu, Tung Mai
Abstract

The Adobe Experience Platform AI Assistant is a conversational tool that enables organizations to interact seamlessly with proprietary enterprise data through a chatbot. However, due to access restrictions, Large Language Models (LLMs) cannot retrieve these internal documents, limiting their ability to generate accurate zero-shot responses. To overcome this limitation, we use a Retrieval-Augmented Generation (RAG) framework powered by a Knowledge Graph (KG) to retrieve relevant information from external knowledge sources, enabling LLMs to answer questions over private or previously unseen document collections. In this paper, we propose a novel approach for building a high-quality, low-noise KG. We apply several techniques, including incremental entity resolution using seed concepts, similarity-based filtering to deduplicate entries, assigning confidence scores to entity-relation pairs so that only high-confidence pairs are retained, and linking facts to their source documents for provenance. Our KG-RAG system retrieves relevant tuples and adds them to the user prompt's context before it is sent to the LLM, which generates the response. Our evaluation demonstrates that this approach significantly enhances response relevance, reducing irrelevant answers by over 50% and increasing fully relevant answers by 88% compared to the existing production system.
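
The pipeline described above has two stages: offline KG construction (entity resolution and deduplication, confidence filtering, provenance links) and online KG-RAG retrieval (fetch relevant tuples, prepend them to the user prompt). The Python sketch below illustrates that flow end to end. It is a minimal illustration, not the paper's implementation: the Triple structure, the difflib string-similarity deduplication, the 0.7 confidence threshold, and the token-overlap retrieval are all assumed stand-ins (the paper uses seed-concept-based incremental entity resolution and does not specify its retriever at this level of detail).

# Minimal sketch of the KG-RAG flow; all names, data, and thresholds
# here are illustrative assumptions, not the paper's actual pipeline.
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class Triple:
    head: str
    relation: str
    tail: str
    confidence: float  # assumed: score assigned during extraction
    source_doc: str    # provenance: the document this fact came from

def canonicalize(entity: str, seen: list[str], threshold: float = 0.9) -> str:
    """Similarity-based deduplication: map near-duplicate entity strings
    (e.g. 'Adobe Experience Platform' vs 'adobe experience platform')
    onto an already-seen canonical form."""
    for canon in seen:
        if SequenceMatcher(None, entity.lower(), canon.lower()).ratio() >= threshold:
            return canon
    seen.append(entity)
    return entity

def build_kg(raw: list[Triple], min_conf: float = 0.7) -> list[Triple]:
    """Keep only high-confidence triples and deduplicate their entities."""
    canonical: list[str] = []
    kg = []
    for t in raw:
        if t.confidence < min_conf:
            continue  # confidence filtering: drop low-confidence pairs
        kg.append(Triple(canonicalize(t.head, canonical), t.relation,
                         canonicalize(t.tail, canonical),
                         t.confidence, t.source_doc))
    return kg

def retrieve(kg: list[Triple], question: str, k: int = 3) -> list[Triple]:
    """Toy retrieval stand-in: rank triples by token overlap with the
    question. A production system would use embeddings or graph search."""
    q = set(question.lower().split())
    def overlap(t: Triple) -> int:
        return len(q & set(f"{t.head} {t.relation} {t.tail}".lower().split()))
    return sorted(kg, key=overlap, reverse=True)[:k]

def augment_prompt(question: str, facts: list[Triple]) -> str:
    """Prepend the retrieved tuples (with provenance) to the user
    prompt before it is sent to the LLM."""
    context = "\n".join(
        f"({t.head}, {t.relation}, {t.tail}) [source: {t.source_doc}]"
        for t in facts)
    return f"Context:\n{context}\n\nQuestion: {question}"

# Example: the low-confidence triple is dropped, the duplicate entity
# spelling is merged, and the survivors are prepended to the prompt.
kg = build_kg([
    Triple("AI Assistant", "part_of", "Adobe Experience Platform", 0.95, "doc_12"),
    Triple("adobe experience platform", "stores", "enterprise data", 0.90, "doc_03"),
    Triple("AI Assistant", "uses", "telepathy", 0.20, "doc_99"),
])
question = "What platform is the AI Assistant part of?"
print(augment_prompt(question, retrieve(kg, question)))

Keeping a source_doc on every tuple, as sketched above, is what lets the assistant trace each retrieved fact back to the document it came from.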

@article{mukherjee2025_2502.15237,
  title={From Documents to Dialogue: Building KG-RAG Enhanced AI Assistants},
  author={Manisha Mukherjee and Sungchul Kim and Xiang Chen and Dan Luo and Tong Yu and Tung Mai},
  journal={arXiv preprint arXiv:2502.15237},
  year={2025}
}