Towards Lighter and Robust Evaluation for Retrieval Augmented Generation

20 March 2025
Alex-Razvan Ispas
Charles-Elie Simon
Fabien Caspani
Vincent Guigue
    RALM
Abstract

Large Language Models are prompting us to view more NLP tasks from a generative perspective. At the same time, they offer a new way of accessing information, mainly through the RAG framework. While autoregressive models have improved notably, overcoming hallucination in the generated answers remains a persistent problem. A standard solution is to use commercial LLMs, such as GPT-4, to evaluate these algorithms. However, such frameworks are expensive and not very transparent. Therefore, we propose a study which demonstrates the value of open-weight models for evaluating RAG hallucination. We develop a lightweight approach using smaller, quantized LLMs to provide an accessible and interpretable metric that gives continuous scores for generated answers with respect to their correctness and faithfulness. This score allows us to question the reliability of decisions and to explore thresholds, from which we develop a new AUC metric as an alternative to correlation with human judgment.
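
The abstract outlines the core recipe: a smaller, quantized open-weight LLM acts as a judge that returns a continuous score for each generated answer, and those scores are compared against binary human labels through an AUC rather than a correlation. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation; the model name, prompt template, and Yes/No probability scoring rule are illustrative assumptions.

# Sketch: a quantized open-weight judge that scores faithfulness as the probability
# it assigns to "Yes", then an AUC computed against binary human labels.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from sklearn.metrics import roc_auc_score

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"  # assumption: any small open-weight judge

# 4-bit quantization keeps the judge light enough for a single consumer GPU.
bnb_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, quantization_config=bnb_config, device_map="auto"
)

PROMPT = (
    "Context:\n{context}\n\n"
    "Answer:\n{answer}\n\n"
    "Is the answer faithful to the context? Reply with Yes or No.\nReply:"
)

def faithfulness_score(context: str, answer: str) -> float:
    """Continuous score in [0, 1]: probability mass the judge puts on 'Yes'."""
    inputs = tokenizer(
        PROMPT.format(context=context, answer=answer), return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token logits
    yes_id = tokenizer.encode("Yes", add_special_tokens=False)[0]
    no_id = tokenizer.encode("No", add_special_tokens=False)[0]
    probs = torch.softmax(logits[[yes_id, no_id]], dim=-1)
    return probs[0].item()

def judge_auc(examples, human_labels):
    """AUC of the continuous judge scores against binary human labels,
    sweeping all decision thresholds implicitly instead of reporting a correlation."""
    scores = [faithfulness_score(ctx, ans) for ctx, ans in examples]
    return roc_auc_score(human_labels, scores)

The same pattern applies to correctness by conditioning the prompt on a reference answer instead of the retrieved context.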

@article{ispas2025_2503.16161,
  title={Towards Lighter and Robust Evaluation for Retrieval Augmented Generation},
  author={Alex-Razvan Ispas and Charles-Elie Simon and Fabien Caspani and Vincent Guigue},
  journal={arXiv preprint arXiv:2503.16161},
  year={2025}
}