PARAPHRASUS: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models
International Conference on Computational Linguistics (COLING), 2024
Main: 9 pages · Appendix: 2 pages · Bibliography: 3 pages · 7 figures · 8 tables
Abstract
The task of determining whether two texts are paraphrases has long been a challenge in NLP. However, the prevailing notion of paraphrase is often quite simplistic, offering only a limited view of the vast spectrum of paraphrase phenomena. Indeed, we find that evaluating models on a single paraphrase dataset can leave uncertainty about their true semantic understanding. To alleviate this, we release PARAPHRASUS, a benchmark designed for multi-dimensional assessment of paraphrase detection models and finer model selection. We find that, under a fine-grained evaluation lens, paraphrase detection models exhibit trade-offs that cannot be captured by a single classification dataset.
