SEval-Ex: A Statement-Level Framework for Explainable Summarization Evaluation

Evaluating text summarization quality remains a critical challenge in Natural Language Processing. Current approaches face a trade-off between performance and interpretability. We present SEval-Ex, a framework that bridges this gap by decomposing summarization evaluation into atomic statements, enabling both high performance and explainability. SEval-Ex employs a two-stage pipeline: it first extracts atomic statements from both the source text and the summary using an LLM, then matches the generated statements against each other. Unlike existing approaches that provide only summary-level scores, our method generates detailed evidence for its decisions through statement-level alignments. Experiments on the SummEval benchmark demonstrate that SEval-Ex achieves state-of-the-art performance, reaching a 0.580 correlation with human consistency judgments and surpassing GPT-4-based evaluators (0.521) while maintaining interpretability. Finally, our framework shows robustness against hallucinations.
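
To make the two-stage pipeline concrete, below is a minimal Python sketch of the process the abstract describes. The prompts, function names, and the supported-fraction scoring heuristic are illustrative assumptions, not the authors' implementation; call_llm stands in for any LLM client that maps a prompt string to a response string.

    # Minimal sketch of a statement-level evaluation pipeline in the spirit
    # of SEval-Ex. All prompts, names, and the scoring rule are assumptions
    # for illustration, not the paper's actual method.

    from typing import Callable, List, Tuple

    LLMFn = Callable[[str], str]  # prompt -> raw LLM response

    def extract_statements(text: str, call_llm: LLMFn) -> List[str]:
        """Stage 1: decompose a text into atomic statements via an LLM."""
        prompt = (
            "Decompose the following text into atomic, self-contained "
            "factual statements, one per line:\n\n" + text
        )
        response = call_llm(prompt)
        # One statement per non-empty line; strip any bullet markers.
        return [ln.strip("- ").strip() for ln in response.splitlines() if ln.strip()]

    def match_statements(
        summary_stmts: List[str],
        source_stmts: List[str],
        call_llm: LLMFn,
    ) -> List[Tuple[str, str]]:
        """Stage 2: align each summary statement against the source statements.

        Returns (statement, verdict) pairs; the alignments themselves serve
        as statement-level evidence for the final score.
        """
        alignments = []
        for stmt in summary_stmts:
            prompt = (
                "Source statements:\n" + "\n".join(source_stmts) +
                "\n\nIs the following statement supported by the source? "
                "Answer 'supported' or 'unsupported'.\nStatement: " + stmt
            )
            verdict = call_llm(prompt).strip().lower()
            alignments.append((stmt, verdict))
        return alignments

    def consistency_score(alignments: List[Tuple[str, str]]) -> float:
        """Fraction of summary statements supported by the source
        (an assumed aggregation; unsupported statements flag hallucinations)."""
        if not alignments:
            return 0.0
        supported = sum(1 for _, v in alignments if v.startswith("supported"))
        return supported / len(alignments)

Given a source document and a summary, one would call extract_statements on each, pass the results to match_statements, and aggregate with consistency_score; the per-statement verdicts are what makes the score explainable rather than a single opaque number.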
@article{herserant2025_2505.02235,
  title={SEval-Ex: A Statement-Level Framework for Explainable Summarization Evaluation},
  author={Tanguy Herserant and Vincent Guigue},
  journal={arXiv preprint arXiv:2505.02235},
  year={2025}
}