HistLLM: A Unified Framework for LLM-Based Multimodal Recommendation with User History Encoding and Compression

14 April 2025
Chen Zhang, Bo Hu, Weidong Chen, Zhendong Mao
Abstract

While large language models (LLMs) have proven effective in leveraging textual data for recommendation, their application to multimodal recommendation tasks remains relatively underexplored. LLMs can process multimodal information through projection functions that map visual features into their semantic space, but recommendation tasks often require representing a user's historical interactions through lengthy prompts combining text and visual elements. Such prompts not only hamper training and inference efficiency but also make it difficult for the model to accurately capture user preferences from complex, extended context, reducing recommendation performance. To address this challenge, we introduce HistLLM, a multimodal recommendation framework that integrates textual and visual features through a User History Encoding Module (UHEM), compressing a user's multimodal interaction history into a single token representation and thereby facilitating the LLM's processing of user preferences. Extensive experiments demonstrate the effectiveness and efficiency of the proposed mechanism.
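
The abstract gives no implementation details, but the core idea, compressing a multimodal interaction history into a single token in the LLM's embedding space, can be sketched. The following is a minimal illustration assuming attention pooling over fused per-item text and visual features; the class name (UserHistoryEncoder), dimensions, and layer choices are assumptions for illustration, not the paper's actual UHEM design.

import torch
import torch.nn as nn

class UserHistoryEncoder(nn.Module):
    # Illustrative sketch (not the paper's UHEM): fuse per-item text and
    # visual features, pool the whole history into one vector with a
    # learnable attention query, and project it into the LLM token space.
    def __init__(self, text_dim=768, vis_dim=512, hidden_dim=512, llm_dim=4096):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)   # per-item text features
        self.vis_proj = nn.Linear(vis_dim, hidden_dim)     # per-item visual features
        # learnable query that attends over the fused history sequence
        self.query = nn.Parameter(torch.randn(1, 1, hidden_dim))
        self.pool = nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)
        self.to_llm = nn.Linear(hidden_dim, llm_dim)       # map into LLM embedding space

    def forward(self, text_feats, vis_feats):
        # text_feats: (batch, hist_len, text_dim); vis_feats: (batch, hist_len, vis_dim)
        items = self.text_proj(text_feats) + self.vis_proj(vis_feats)
        q = self.query.expand(items.size(0), -1, -1)
        pooled, _ = self.pool(q, items, items)             # (batch, 1, hidden_dim)
        return self.to_llm(pooled)                         # single "history token"

# Usage: the (batch, 1, llm_dim) output would be spliced into the LLM's input
# embeddings in place of the long multimodal history prompt.
enc = UserHistoryEncoder()
tok = enc(torch.randn(2, 20, 768), torch.randn(2, 20, 512))  # -> (2, 1, 4096)

Under these assumptions, the efficiency gain comes from replacing a prompt whose length grows with the interaction history by one fixed token, so LLM input length no longer scales with history size.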

@article{zhang2025_2504.10150,
  title={HistLLM: A Unified Framework for LLM-Based Multimodal Recommendation with User History Encoding and Compression},
  author={Chen Zhang and Bo Hu and Weidong Chen and Zhendong Mao},
  journal={arXiv preprint arXiv:2504.10150},
  year={2025}
}