Boosting Knowledge Graph-based Recommendations through Confidence-Aware Augmentation with Large Language Models

6 February 2025
Rui Cai, Chao Wang, Qianyi Cai, Dazhong Shen, Hui Xiong
Abstract

Knowledge Graph-based recommendations have gained significant attention due to their ability to leverage rich semantic relationships. However, constructing and maintaining Knowledge Graphs (KGs) is resource-intensive, and the accuracy of KGs can suffer from noisy, outdated, or irrelevant triplets. Recent advancements in Large Language Models (LLMs) offer a promising way to improve the quality and relevance of KGs for recommendation tasks. Despite this, integrating LLMs into KG-based systems presents challenges, such as efficiently augmenting KGs, addressing hallucinations, and developing effective joint learning methods. In this paper, we propose the Confidence-aware KG-based Recommendation Framework with LLM Augmentation (CKG-LLMA), a novel framework that combines KGs and LLMs for recommendation tasks. The framework includes: (1) an LLM-based subgraph augmenter for enriching KGs with high-quality information, (2) a confidence-aware message propagation mechanism to filter noisy triplets, and (3) a dual-view contrastive learning method to integrate user-item interactions and KG data. Additionally, we employ a confidence-aware explanation generation process to guide LLMs in producing realistic explanations for recommendations. Finally, extensive experiments demonstrate the effectiveness of CKG-LLMA across multiple public datasets.
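To make the second component concrete, the snippet below is a minimal, hypothetical sketch of confidence-aware message propagation over KG triplets. The abstract does not specify how confidence is computed or how messages are aggregated, so the TransE-style plausibility score, the threshold, and all variable names here are illustrative assumptions, not the paper's implementation.

# Minimal sketch of confidence-aware message propagation over KG triplets.
# The scoring function and all names are illustrative assumptions; the
# paper's exact formulation is not given in the abstract.
import numpy as np

rng = np.random.default_rng(0)

num_entities, num_relations, dim = 6, 3, 8
entity_emb = rng.normal(size=(num_entities, dim))
relation_emb = rng.normal(size=(num_relations, dim))

# Triplets as (head, relation, tail) index tuples.
triplets = [(0, 0, 1), (0, 1, 2), (0, 2, 3), (1, 0, 4)]

def triplet_confidence(h, r, t):
    """TransE-style plausibility mapped to (0, 1); a stand-in for a
    learned confidence score."""
    score = -np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])
    return 1.0 / (1.0 + np.exp(-score))

def propagate(entity, min_conf=0.3):
    """Aggregate neighbor messages, weighting each edge by its confidence
    and dropping triplets below a threshold (noise filtering)."""
    msgs, weights = [], []
    for h, r, t in triplets:
        if h != entity:
            continue
        conf = triplet_confidence(h, r, t)
        if conf < min_conf:
            continue  # filter likely-noisy triplets
        msgs.append(entity_emb[t] + relation_emb[r])
        weights.append(conf)
    if not msgs:
        return entity_emb[entity]
    weights = np.array(weights) / np.sum(weights)
    neighbor_msg = np.average(np.stack(msgs), axis=0, weights=weights)
    return entity_emb[entity] + neighbor_msg  # residual-style update

print(propagate(0))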

@article{cai2025_2502.03715,
  title={Boosting Knowledge Graph-based Recommendations through Confidence-Aware Augmentation with Large Language Models},
  author={Rui Cai and Chao Wang and Qianyi Cai and Dazhong Shen and Hui Xiong},
  journal={arXiv preprint arXiv:2502.03715},
  year={2025}
}