Training Large Recommendation Models via Graph-Language Token Alignment

Abstract

Recommender systems (RS) have become essential tools for helping users efficiently navigate the overwhelming volume of information on e-commerce and social platforms. However, traditional RS based on Collaborative Filtering (CF) struggle to integrate the rich semantic information available in textual data. Meanwhile, large language models (LLMs) have shown promising results in natural language processing, but using LLMs directly for recommendation introduces challenges such as ambiguity in generating item predictions and inefficiencies in scalability. In this paper, we propose GLTA, a novel framework for training large recommendation models via Graph-Language Token Alignment. By aligning item and user nodes from the interaction graph with pretrained LLM tokens, GLTA effectively leverages the reasoning abilities of LLMs. Furthermore, we introduce Graph-Language Logits Matching (GLLM) to optimize token alignment for end-to-end item prediction, eliminating the ambiguity of free-form text as recommendation output. Extensive experiments on three benchmark datasets demonstrate the effectiveness of GLTA, with ablation studies validating each component.
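To make the alignment idea concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of how graph node embeddings might be projected into an LLM's token-embedding space and how candidate items could then be scored by logits matching; all module names, dimensions, and the linear projection are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenAlignmentSketch(nn.Module):
    """Illustrative sketch: project graph item-node embeddings into an LLM's
    token-embedding space and score candidates by logits matching.
    Dimensions and layers are assumptions, not the paper's actual code."""

    def __init__(self, graph_dim=64, llm_dim=768, num_items=1000):
        super().__init__()
        # Stand-in for GNN-produced item node embeddings.
        self.item_nodes = nn.Embedding(num_items, graph_dim)
        # Alignment projection: graph space -> LLM token-embedding space.
        self.align = nn.Linear(graph_dim, llm_dim)

    def forward(self, user_repr, candidate_items):
        # user_repr: (batch, llm_dim) hidden state from a (frozen) LLM.
        # candidate_items: (num_candidates,) item ids.
        item_tokens = self.align(self.item_nodes(candidate_items))  # (C, llm_dim)
        # Logits matching: score each aligned item token against the user state.
        logits = user_repr @ item_tokens.t()                        # (batch, C)
        return logits

# Toy usage: rank candidate items directly from the matched logits.
model = TokenAlignmentSketch()
user_repr = torch.randn(2, 768)        # placeholder LLM output for 2 users
candidates = torch.arange(10)          # 10 candidate item ids
scores = F.softmax(model(user_repr, candidates), dim=-1)
print(scores.argmax(dim=-1))           # top-1 recommended item per user
```

Scoring items through matched logits, rather than decoding free-form text, is what makes the prediction end-to-end and unambiguous: the output is an index over candidate items, not a generated string that must be parsed back into an item.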

@article{yang2025_2502.18757,
  title={Training Large Recommendation Models via Graph-Language Token Alignment},
  author={Mingdai Yang and Zhiwei Liu and Liangwei Yang and Xiaolong Liu and Chen Wang and Hao Peng and Philip S. Yu},
  journal={arXiv preprint arXiv:2502.18757},
  year={2025}
}