
LLM-Independent Adaptive RAG: Let the Question Speak for Itself

Abstract

Large Language Models (LLMs) are prone to hallucinations, and Retrieval-Augmented Generation (RAG) helps mitigate this, but at a high computational cost and with the risk of introducing misinformation. Adaptive retrieval aims to retrieve only when necessary, but existing approaches rely on LLM-based uncertainty estimation, which remains inefficient and impractical. In this study, we introduce lightweight, LLM-independent adaptive retrieval methods based on external information. We investigate 27 features, organized into 7 groups, as well as their hybrid combinations, and evaluate these methods on 6 QA datasets, assessing both QA performance and efficiency. The results show that our approach matches the performance of complex LLM-based methods while achieving significant efficiency gains, demonstrating the potential of external information for adaptive retrieval.
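To make the idea concrete, the sketch below shows what an LLM-independent retrieval gate could look like: a lightweight classifier over question-level features decides whether to retrieve before any LLM call. This is a minimal illustration, not the paper's method; the feature functions, class names, and the specific features used here (question length, a capitalization-based entity proxy, factoid cues) are illustrative assumptions and do not reproduce the 27 features or 7 groups studied in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def question_features(question: str) -> np.ndarray:
    """Hypothetical external features computed from the question text alone (no LLM)."""
    tokens = question.split()
    return np.array(
        [
            len(tokens),                                   # question length
            sum(t[:1].isupper() for t in tokens),          # capitalized tokens as a rough entity proxy
            int("?" in question),                          # well-formedness cue
            int(any(w in question.lower()                  # factoid-style question cue
                    for w in ("who", "when", "where", "which"))),
        ],
        dtype=float,
    )


class RetrievalGate:
    """Decides whether retrieval is needed using only question-level features."""

    def __init__(self) -> None:
        self.clf = LogisticRegression(max_iter=1000)

    def fit(self, questions: list[str], needs_retrieval: list[int]) -> None:
        # needs_retrieval: binary labels, e.g. whether the LLM alone answered correctly
        X = np.stack([question_features(q) for q in questions])
        self.clf.fit(X, needs_retrieval)

    def should_retrieve(self, question: str) -> bool:
        x = question_features(question).reshape(1, -1)
        return bool(self.clf.predict(x)[0])
```

In use, such a gate would be trained once on labeled questions and then queried per incoming question; retrieval (and its cost) is triggered only when `should_retrieve` returns `True`, while all other questions go straight to the generator.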

@article{marina2025_2505.04253,
  title={LLM-Independent Adaptive RAG: Let the Question Speak for Itself},
  author={Maria Marina and Nikolay Ivanov and Sergey Pletenev and Mikhail Salnikov and Daria Galimzianova and Nikita Krayko and Vasily Konovalov and Alexander Panchenko and Viktor Moskvoretskii},
  journal={arXiv preprint arXiv:2505.04253},
  year={2025}
}