Generative Large Recommendation Models: Emerging Trends in LLMs for Recommendation

20 February 2025
Hao Wang
Wei Guo
Luankang Zhang
Jin Yao Chin
Yufei Ye
Huifeng Guo
Yong Liu
Defu Lian
Ruiming Tang
Enhong Chen
Abstract

In the era of information overload, recommendation systems play a pivotal role in filtering data and delivering personalized content. Recent advancements in feature interaction and user behavior modeling have significantly enhanced the recall and ranking processes of these systems. With the rise of large language models (LLMs), new opportunities have emerged to further improve recommendation systems. This tutorial explores two primary approaches for integrating LLMs: LLM-enhanced recommendations, which leverage the reasoning capabilities of general LLMs, and generative large recommendation models, which focus on scaling and sophistication. While the former has been extensively covered in existing literature, the latter remains underexplored. This tutorial aims to fill this gap by providing a comprehensive overview of generative large recommendation models, including their recent advancements, challenges, and potential research directions. Key topics include data quality, scaling laws, user behavior mining, and efficiency in training and inference. By engaging with this tutorial, participants will gain insights into the latest developments and future opportunities in the field, aiding both academic research and practical applications. The timely nature of this exploration supports the rapid evolution of recommendation systems, offering valuable guidance for researchers and practitioners alike.

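To make the idea of a "generative large recommendation model" concrete, the sketch below frames recommendation as autoregressive next-item prediction: a user's interaction history is tokenized as item IDs and a decoder-style transformer is trained to generate the next item. This is an illustrative assumption about the general paradigm the abstract describes, not the paper's specific architecture; all class names, hyperparameters, and the toy data are hypothetical.

```python
# Minimal sketch of a generative recommendation model: a causal transformer
# trained to predict the next item ID in a user's interaction sequence.
# All names and hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class GenerativeRecommender(nn.Module):
    def __init__(self, num_items, d_model=128, n_heads=4, n_layers=2, max_len=50):
        super().__init__()
        self.item_emb = nn.Embedding(num_items + 1, d_model, padding_idx=0)  # 0 = padding
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, num_items + 1)  # logits over the item vocabulary

    def forward(self, item_ids):
        # item_ids: (batch, seq_len) integer item IDs
        seq_len = item_ids.size(1)
        pos = torch.arange(seq_len, device=item_ids.device)
        x = self.item_emb(item_ids) + self.pos_emb(pos)
        # Causal mask so each position attends only to earlier interactions
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len).to(item_ids.device)
        h = self.encoder(x, mask=mask)
        return self.head(h)  # (batch, seq_len, num_items + 1) next-item logits

# Toy training step: shift the sequence by one position so the model learns
# to generate the next item given the history.
model = GenerativeRecommender(num_items=10_000)
seqs = torch.randint(1, 10_001, (8, 20))  # hypothetical batch of interaction histories
logits = model(seqs[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), seqs[:, 1:].reshape(-1)
)
loss.backward()
```

Scaling this paradigm up (more parameters, longer behavior sequences, richer item tokenizations) is where the scaling-law and training/inference-efficiency questions listed in the abstract come into play.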
@article{wang2025_2502.13783,
  title={Generative Large Recommendation Models: Emerging Trends in LLMs for Recommendation},
  author={Hao Wang and Wei Guo and Luankang Zhang and Jin Yao Chin and Yufei Ye and Huifeng Guo and Yong Liu and Defu Lian and Ruiming Tang and Enhong Chen},
  journal={arXiv preprint arXiv:2502.13783},
  year={2025}
}