Evaluating Discourse Cohesion in Pre-trained Language Models

Abstract

Large pre-trained neural models have achieved remarkable success in natural language processing (NLP), inspiring a growing body of research that analyzes their abilities from different perspectives. In this paper, we propose a test suite to evaluate the discourse cohesion ability of pre-trained language models. The test suite covers multiple cohesion phenomena between adjacent and non-adjacent sentences. We compare different pre-trained language models on these phenomena and analyze the experimental results, hoping that more attention will be paid to discourse cohesion in the future.
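
The paper itself defines the test suite; as a rough illustration only, the sketch below shows one common way such a cohesion minimal pair could be scored with an off-the-shelf pre-trained language model, by comparing the log-likelihood a model assigns to a coherent versus an incoherent continuation. The choice of GPT-2, the scoring function, and the example sentences are all assumptions for illustration, not the authors' actual protocol.

# Illustrative sketch (assumed setup, not the paper's method): score a
# cohesion minimal pair by comparing sentence log-likelihoods under GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_likelihood(text: str) -> float:
    """Total log-probability of `text` under the language model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean per-token negative log-likelihood over the
    # ids.size(1) - 1 predicted tokens; scale it back to a summed score.
    return -out.loss.item() * (ids.size(1) - 1)

context = "Tom dropped the glass. "
coherent = context + "It shattered on the floor."      # cohesive reference
incoherent = context + "They shattered on the floor."  # pronoun mismatch

# A model sensitive to this cohesion phenomenon should prefer the
# coherent continuation.
print(log_likelihood(coherent) > log_likelihood(incoherent))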

@article{he2025_2503.06137,
  title={Evaluating Discourse Cohesion in Pre-trained Language Models},
  author={Jie He and Wanqiu Long and Deyi Xiong},
  journal={arXiv preprint arXiv:2503.06137},
  year={2025}
}