Adapting Large Language Models for Multi-Domain Retrieval-Augmented-Generation

Abstract

Retrieval-Augmented Generation (RAG) enhances LLM factuality, but multi-domain applications face challenges such as the lack of diverse benchmarks and poor out-of-domain generalization. The first contribution of this work is a diverse benchmark comprising a variety of question-answering tasks from 8 sources and covering 13 domains. Our second contribution is a systematic test of out-of-domain generalization for typical RAG tuning strategies. Our findings reveal that standard fine-tuning fails to generalize effectively, whereas sequence-level distillation with teacher-generated labels improves out-of-domain performance by providing more coherent supervision. These results highlight key strategies for improving multi-domain RAG robustness.
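To make the distillation setup mentioned above concrete, below is a minimal sketch of sequence-level distillation for RAG tuning, assuming HuggingFace-style causal LMs. The model names, prompt format, and hyperparameters are illustrative placeholders, not the paper's exact configuration.

# Minimal sketch: a teacher LLM generates answers for retrieval-augmented
# prompts, and the student is fine-tuned on those teacher-generated sequences.
# Assumes teacher and student share a tokenizer/vocabulary (hypothetical setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_name = "teacher-llm"   # placeholder checkpoint identifiers
student_name = "student-llm"

tokenizer = AutoTokenizer.from_pretrained(student_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name).eval()
student = AutoModelForCausalLM.from_pretrained(student_name).train()
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

def build_prompt(question, passages):
    # RAG-style prompt: retrieved passages followed by the question.
    context = "\n".join(passages)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def distillation_step(question, passages):
    prompt = build_prompt(question, passages)
    inputs = tokenizer(prompt, return_tensors="pt")

    # 1) Teacher produces the target answer (the sequence-level label).
    with torch.no_grad():
        generated = teacher.generate(**inputs, max_new_tokens=64)
    target_ids = generated[:, inputs["input_ids"].shape[1]:]

    # 2) Student is trained with cross-entropy on the teacher's output;
    #    prompt tokens are masked so only the answer contributes to the loss.
    full_ids = torch.cat([inputs["input_ids"], target_ids], dim=1)
    labels = full_ids.clone()
    labels[:, : inputs["input_ids"].shape[1]] = -100  # ignore prompt positions

    loss = student(input_ids=full_ids, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

Training on teacher-generated answers, rather than the original gold labels, is what gives the student a more coherent supervision signal in this style of distillation.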

@article{misrahi2025_2504.02411,
  title={Adapting Large Language Models for Multi-Domain Retrieval-Augmented-Generation},
  author={Alexandre Misrahi and Nadezhda Chirkova and Maxime Louis and Vassilina Nikoulina},
  journal={arXiv preprint arXiv:2504.02411},
  year={2025}
}