
Improving RAG for Personalization with Author Features and Contrastive Examples

Abstract

Personalization with retrieval-augmented generation (RAG) often fails to capture fine-grained features of authors, making it hard to identify their unique traits. To enrich the RAG context, we propose providing Large Language Models (LLMs) with author-specific features, such as average sentiment polarity and frequently used words, in addition to past samples from the author's profile. We introduce a new feature called Contrastive Examples: documents from other authors are retrieved to help the LLM identify what makes an author's style unique in comparison to others. Our experiments show that adding a couple of sentences about the named entities, dependency patterns, and words a person uses frequently significantly improves personalized text generation. Combining features with contrastive examples boosts performance further, achieving a relative 15% improvement over baseline RAG while outperforming the benchmarks. Our results show the value of fine-grained features for better personalization, and they open a new research direction: including contrastive examples as a complement to RAG. We release our code publicly.
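To make the idea concrete, below is a minimal sketch of how an author feature summary and a contrastive prompt could be assembled. This is not the authors' released implementation: the use of spaCy and TextBlob for named entities, frequent words, and sentiment polarity is an assumption, and the function names and prompt wording are hypothetical illustrations of the described context enrichment.

```python
from collections import Counter
from typing import List

import spacy                   # assumed choice for NER and tokenization
from textblob import TextBlob  # assumed choice for sentiment polarity

nlp = spacy.load("en_core_web_sm")
STOPWORDS = nlp.Defaults.stop_words


def author_feature_summary(docs: List[str], top_k: int = 10) -> str:
    """Describe an author's fine-grained features in a few sentences
    (average sentiment polarity, frequent words, frequent named entities)."""
    polarity = sum(TextBlob(d).sentiment.polarity for d in docs) / max(len(docs), 1)

    words, entities = Counter(), Counter()
    for d in docs:
        parsed = nlp(d)
        words.update(
            t.lemma_.lower()
            for t in parsed
            if t.is_alpha and t.lemma_.lower() not in STOPWORDS
        )
        entities.update(ent.text for ent in parsed.ents)

    frequent = ", ".join(w for w, _ in words.most_common(top_k)) or "none"
    named = ", ".join(e for e, _ in entities.most_common(top_k)) or "none"
    return (
        f"Average sentiment polarity: {polarity:.2f}. "
        f"Frequently used words: {frequent}. "
        f"Frequently mentioned entities: {named}."
    )


def build_prompt(author_docs: List[str], contrastive_docs: List[str], task: str) -> str:
    """Combine retrieved samples from the target author, the feature summary,
    and contrastive examples retrieved from other authors into one prompt."""
    return "\n\n".join(
        [
            "Past documents written by the target author:",
            *author_docs,
            "Author profile features:",
            author_feature_summary(author_docs),
            "Documents written by OTHER authors (for contrast; do not imitate their style):",
            *contrastive_docs,
            f"Task: {task}",
        ]
    )
```

In this sketch, the contrastive documents are simply appended to the context with an explicit instruction not to imitate them; how the paper retrieves and formats those examples may differ.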

@article{yazan2025_2504.08745,
  title={Improving RAG for Personalization with Author Features and Contrastive Examples},
  author={Mert Yazan and Suzan Verberne and Frederik Situmeang},
  journal={arXiv preprint arXiv:2504.08745},
  year={2025}
}