LLMs have demonstrated impressive proficiency in generating coherent and high-quality text, making them valuable across a range of text-generation tasks. However, rigorous evaluation of this generated content is crucial: ensuring its quality remains a significant challenge due to persistent issues such as factual inaccuracies and hallucinations. This paper introduces three fine-tuned general-purpose LLM auto-evaluators, REC-8B, REC-12B, and REC-70B, specifically designed to evaluate generated text across several dimensions: faithfulness, instruction following, coherence, and completeness. These models not only provide ratings for these metrics but also offer detailed explanations and verifiable citations, thereby enhancing trust in the content. Moreover, the models support various citation modes, accommodating different requirements for latency and granularity. Extensive evaluations on diverse benchmarks demonstrate that our general-purpose LLM auto-evaluator, REC-70B, outperforms state-of-the-art LLMs, excelling in content evaluation by delivering higher-quality explanations and citations with minimal bias. As of Feb 15, 2025, it ranks #1 among generative models on the RewardBench leaderboard under the model name TextEval-Llama3.1-70B. Our REC dataset and models are available at this https URL.
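The abstract does not specify an inference interface, but since the models are released as fine-tuned LLMs, a minimal sketch of how such an auto-evaluator might be queried with Hugging Face transformers is shown below. The model path, prompt template, rating scale, and output format are assumptions for illustration only, not the released REC interface.

```python
# Hypothetical sketch: querying a REC-style auto-evaluator for ratings,
# explanations, and citations. Model id and prompt format are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/to/REC-70B"  # placeholder; substitute the released checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Toy inputs: a source document, the instruction given to the generator,
# and the generated answer to be evaluated.
source_text = "The Eiffel Tower, completed in 1889, is 330 meters tall."
instruction = "State when the Eiffel Tower was completed."
answer = "The Eiffel Tower was completed in 1889."

# Assumed prompt template covering the four REC evaluation dimensions.
prompt = (
    "Evaluate the candidate answer against the source document.\n"
    "Rate faithfulness, instruction following, coherence, and completeness "
    "on a 1-5 scale; explain each rating and cite the supporting source spans.\n\n"
    f"Source: {source_text}\nInstruction: {instruction}\nAnswer: {answer}\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens (ratings + explanations + citations).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```

The different citation modes mentioned in the abstract would presumably be selected through the prompt or generation settings, trading citation granularity against latency; the released models and dataset are the authoritative reference for that interface.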
@article{hsu2025_2411.02448,
  title={Rate, Explain and Cite (REC): Enhanced Explanation and Attribution in Automatic Evaluation by Large Language Models},
  author={Aliyah R. Hsu and James Zhu and Zhichao Wang and Bin Bi and Shubham Mehrotra and Shiva K. Pentyala and Katherine Tan and Xiang-Bo Mao and Roshanak Omrani and Sougata Chaudhuri and Regunathan Radhakrishnan and Sitaram Asur and Claire Na Cheng and Bin Yu},
  journal={arXiv preprint arXiv:2411.02448},
  year={2025}
}