LLM-based Bi-level Multi-interest Learning Framework for Sequential Recommendation

Sequential recommendation (SR) leverages users' dynamic preferences, with recent advances incorporating multi-interest learning to model diverse user interests. However, most multi-interest SR models rely on noisy, sparse implicit feedback, limiting recommendation accuracy. Large language models (LLMs) offer robust reasoning on low-quality data but face high computational costs and latency challenges for SR integration. We propose a novel LLM-based multi-interest SR framework combining implicit behavioral and explicit semantic perspectives. It includes two modules: the Implicit Behavioral Interest Module (IBIM), which learns from user behavior using a traditional SR model, and the Explicit Semantic Interest Module (ESIM), which uses clustering and prompt-engineered LLMs to extract semantic multi-interest representations from informative samples. Semantic insights from ESIM enhance IBIM's behavioral representations via modality alignment and semantic prediction tasks. During inference, only IBIM is used, ensuring efficient, LLM-free recommendations. Experiments on four real-world datasets validate the framework's effectiveness and practicality.
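The modality-alignment idea described above can be sketched minimally: behavioral embeddings from IBIM are pulled toward the LLM-derived semantic embeddings from ESIM during training, while inference would use only the behavioral side. The sketch below is illustrative, not the paper's implementation; both embedding matrices are random stand-ins (the paper derives the semantic side via clustering and prompt-engineered LLMs), and the InfoNCE-style contrastive loss is one common choice of alignment objective, assumed here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a batch of B users, embedding size d.
B, d = 4, 8

# IBIM side: behavioral interest embeddings from a traditional SR encoder
# (stubbed as random vectors for this sketch).
behavioral = rng.normal(size=(B, d))

# ESIM side: semantic interest embeddings, in the paper precomputed offline
# via clustering and prompt-engineered LLMs (also stubbed here).
semantic = rng.normal(size=(B, d))

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def alignment_loss(z_beh, z_sem, tau=0.1):
    """InfoNCE-style modality-alignment loss (an assumed objective):
    each user's behavioral embedding should match that same user's
    semantic embedding more closely than any other user's."""
    z_beh, z_sem = l2_normalize(z_beh), l2_normalize(z_sem)
    logits = z_beh @ z_sem.T / tau                 # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # diagonal = positives

loss = alignment_loss(behavioral, semantic)

# At inference time, only the behavioral branch (IBIM) is evaluated,
# so no LLM call is needed to score candidate items.
```

Because the log-softmax term is never positive, the loss is always non-negative; training would minimize it jointly with the recommendation and semantic-prediction objectives the abstract mentions.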
@article{qiao2025_2411.09410,
  title={LLM-based Bi-level Multi-interest Learning Framework for Sequential Recommendation},
  author={Shutong Qiao and Chen Gao and Wei Yuan and Yong Li and Hongzhi Yin},
  journal={arXiv preprint arXiv:2411.09410},
  year={2025}
}