
Enhancing High-order Interaction Awareness in LLM-based Recommender Model

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Main: 9 pages · Appendix: 4 pages · Bibliography: 3 pages · 10 figures · 10 tables
Abstract

Large language models (LLMs) have demonstrated prominent reasoning capabilities in recommendation tasks by transforming them into text-generation tasks. However, existing approaches either disregard or ineffectively model user-item high-order interactions. To this end, this paper presents an enhanced LLM-based recommender (ELMRec). We augment whole-word embeddings to substantially improve LLMs' interpretation of graph-constructed interactions for recommendations, without requiring graph pre-training. This finding may inspire efforts to incorporate rich knowledge graphs into LLM-based recommenders via whole-word embeddings. We also find that LLMs often recommend items based on users' earlier interactions rather than recent ones, and we present a reranking solution. Our ELMRec outperforms state-of-the-art (SOTA) methods in both direct and sequential recommendations.
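The abstract only sketches the idea at a high level, but the general whole-word-embedding mechanism it refers to can be illustrated as follows. This is a minimal, hypothetical sketch, not the authors' implementation: it assumes that every sub-word token belonging to the same user/item ID receives one shared "whole-word" vector, added to the ordinary token embedding before the LLM; the class name, sizes, and the way whole-word indices are assigned are illustrative assumptions.

```python
# Hypothetical sketch of whole-word embeddings for an LLM-based recommender.
# All sub-word tokens of the same user/item ID share one learned vector that is
# added to the regular token embeddings (index 0 means "not part of an ID").
import torch
import torch.nn as nn

class WholeWordEnhancedEmbedding(nn.Module):
    def __init__(self, token_embedding: nn.Embedding, num_whole_words: int):
        super().__init__()
        self.token_embedding = token_embedding  # the LLM's token embedding table
        d_model = token_embedding.embedding_dim
        # one row per user/item ID; could be initialized from graph-derived signals
        self.whole_word_embedding = nn.Embedding(num_whole_words, d_model)

    def forward(self, input_ids: torch.LongTensor,
                whole_word_ids: torch.LongTensor) -> torch.Tensor:
        # whole_word_ids has the same shape as input_ids; sub-word tokens of the
        # same user/item ID carry the same whole-word index.
        return self.token_embedding(input_ids) + self.whole_word_embedding(whole_word_ids)

# Toy usage: a prompt mentioning "user_42" and "item_7", each split into two sub-words.
tok = nn.Embedding(32000, 768)
emb = WholeWordEnhancedEmbedding(tok, num_whole_words=5000)
input_ids = torch.tensor([[101, 2203, 2204, 345, 901, 902]])  # hypothetical sub-word ids
whole_word_ids = torch.tensor([[0, 1, 1, 0, 2, 2]])           # user_42 -> 1, item_7 -> 2
print(emb(input_ids, whole_word_ids).shape)                    # torch.Size([1, 6, 768])
```

How the whole-word vectors are actually constructed from graph-structured, high-order interactions is the paper's contribution and is not reproduced here.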
