Could Small Language Models Serve as Recommenders? Towards Data-centric
Cold-start Recommendations
Recommendation systems help users find information that matches their interests based on their historical behaviors. However, generating personalized recommendations becomes challenging in the absence of any historical user-item interactions, a practical problem for startups known as system cold-start recommendation. Existing research tackles user or item cold-start scenarios but offers no solution for the system cold-start setting. To address the problem, we initially propose PromptRec, a simple but effective approach based on in-context learning with language models, in which we transform the recommendation task into a sentiment analysis task over natural language descriptions of user and item profiles. However, this naive strategy relies heavily on the strong in-context learning ability that emerges only in large language models, which can suffer from significant latency in online recommendation. To bridge this gap, we present a theoretical framework that formalizes the connection between in-context recommendation and language modeling. Building on it, we propose to enhance small language models with a data-centric pipeline consisting of: (1) constructing a refined corpus for model pre-training; (2) constructing a decomposed prompt template via prompt pre-training. These correspond to the development of training data and inference data, respectively. To evaluate the proposed method, we introduce a cold-start recommendation benchmark, and the results demonstrate that the enhanced small language models achieve cold-start recommendation performance comparable to that of large models while requiring only around 17% of their inference time. To the best of our knowledge, this is the first study to tackle the system cold-start recommendation problem. We believe our findings will provide valuable insights for future work. The benchmark and implementations are available at https://github.com/JacksonWuxs/PromptRec.
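The core idea of framing recommendation as sentiment analysis can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt template, verbalizer words ("good"/"bad"), and the stub `toy_lm` scorer are all hypothetical stand-ins for a real masked language model.

```python
# Hedged sketch: rank items by the probability a language model assigns
# to a positive vs. negative sentiment word in a cloze-style prompt
# built from user and item profiles. Names and template are illustrative.

def build_prompt(user_profile: str, item_profile: str) -> str:
    # Verbalize the profiles into a fill-in-the-blank sentiment sentence.
    return (f"User: {user_profile}. Item: {item_profile}. "
            "Overall, the user feels [MASK] about this item.")

def score(lm_word_probs, user_profile: str, item_profile: str,
          pos_word: str = "good", neg_word: str = "bad") -> float:
    # lm_word_probs(prompt) -> {word: probability} at the [MASK] slot.
    probs = lm_word_probs(build_prompt(user_profile, item_profile))
    # Relative preference for the positive verbalizer word in [0, 1].
    return probs[pos_word] / (probs[pos_word] + probs[neg_word])

def toy_lm(prompt: str) -> dict:
    # Stub "language model" for illustration only: a crude word-count
    # signal stands in for real masked-token probabilities.
    p = min(0.9, 0.05 * len(set(prompt.lower().split())))
    return {"good": p, "bad": 1.0 - p}

# Rank candidate items for a cold-start user by predicted sentiment.
items = ["a sci-fi movie about space travel", "a daytime cooking show"]
ranked = sorted(items,
                key=lambda it: score(toy_lm, "enjoys space documentaries", it),
                reverse=True)
```

In practice the stub would be replaced by a pre-trained (small) language model whose mask-token distribution is read off directly, so no interaction history is needed at inference time.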