End-to-end Training for Recommendation with Language-based User Profiles

Abstract

There is growing interest in natural language-based user profiles for recommender systems, which aim to enhance transparency and scrutability compared with embedding-based methods. Existing studies primarily generate these profiles using zero-shot inference from large language models (LLMs), but their quality remains insufficient, leading to suboptimal recommendation performance. In this paper, we introduce LangPTune, the first end-to-end training framework for optimizing LLM-generated user profiles. Our method significantly outperforms zero-shot approaches by explicitly training the LLM for the recommendation objective. Through extensive evaluations across diverse training configurations and benchmarks, we demonstrate that LangPTune not only surpasses zero-shot baselines but can also match the performance of state-of-the-art embedding-based methods. Finally, we investigate whether the training procedure preserves the interpretability of these profiles compared to zero-shot inference through both GPT-4 simulations and crowdworker user studies. An implementation of LangPTune can be found at this https URL.

@article{gao2025_2410.18870,
  title={End-to-end Training for Recommendation with Language-based User Profiles},
  author={Zhaolin Gao and Joyce Zhou and Yijia Dai and Thorsten Joachims},
  journal={arXiv preprint arXiv:2410.18870},
  year={2025}
}