Enhancing Coreference Resolution with Pretrained Language Models: Bridging the Gap Between Syntax and Semantics

Abstract

Large language models have driven significant advances across natural language processing tasks, including coreference resolution. However, traditional methods often fall short in distinguishing referential relationships because they do not integrate syntactic and semantic information. This study introduces a framework for enhancing coreference resolution with pretrained language models. Our approach combines syntactic parsing with semantic role labeling to capture finer distinctions in referential relationships. By employing state-of-the-art pretrained models to obtain contextual embeddings and applying an attention mechanism during fine-tuning, we improve performance on coreference tasks. Experimental results across diverse datasets show that our method surpasses conventional coreference resolution systems, achieving notable accuracy in disambiguating references. This improvement not only strengthens coreference resolution but also benefits other natural language processing tasks that depend on precise referential understanding.

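The abstract describes the architecture only at a high level. As a rough illustration, a minimal sketch of the fusion idea might look like the code below: contextual token embeddings from a pretrained encoder are combined with dependency-label and semantic-role features through an attention layer, and candidate mention pairs are then scored for coreference. The class name SyntaxSemanticsCorefScorer, the label-inventory sizes, the mean-pooled mention representation, and the pairwise scoring head are all illustrative assumptions, not the authors' released implementation.

# Minimal sketch (assumed design, not the paper's code): fuse contextual,
# syntactic, and semantic views of each token with attention, then score
# mention pairs for coreference.
import torch
import torch.nn as nn

class SyntaxSemanticsCorefScorer(nn.Module):
    def __init__(self, hidden=768, n_dep_labels=50, n_srl_labels=30, n_heads=8):
        super().__init__()
        # Embeddings for dependency relations and semantic-role labels
        # (label inventory sizes are placeholder assumptions).
        self.dep_emb = nn.Embedding(n_dep_labels, hidden)
        self.srl_emb = nn.Embedding(n_srl_labels, hidden)
        # Attention that lets each token combine its contextual, syntactic,
        # and semantic views before mention scoring.
        self.fuse = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        # Pairwise scorer over concatenated mention representations.
        self.pair_scorer = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, contextual, dep_ids, srl_ids, mention_spans):
        # contextual:    (batch, seq, hidden) embeddings from a pretrained encoder
        # dep_ids:       (batch, seq) dependency-label ids per token
        # srl_ids:       (batch, seq) semantic-role ids per token
        # mention_spans: list of (start, end) token indices of candidate mentions
        views = torch.stack(
            [contextual, self.dep_emb(dep_ids), self.srl_emb(srl_ids)], dim=2
        )  # (batch, seq, 3, hidden)
        b, s, v, h = views.shape
        # Self-attention over the three views of each token, then average.
        fused, _ = self.fuse(views.view(b * s, v, h),
                             views.view(b * s, v, h),
                             views.view(b * s, v, h))
        fused = fused.mean(dim=1).view(b, s, h)  # one fused vector per token

        # Represent each mention by mean-pooling its tokens, then score all pairs.
        mentions = torch.stack(
            [fused[:, start:end + 1].mean(dim=1) for start, end in mention_spans],
            dim=1
        )  # (batch, n_mentions, hidden)
        n = mentions.size(1)
        pairs = torch.cat(
            [mentions.unsqueeze(2).expand(-1, -1, n, -1),
             mentions.unsqueeze(1).expand(-1, n, -1, -1)], dim=-1
        )
        return self.pair_scorer(pairs).squeeze(-1)  # (batch, n_mentions, n_mentions)

if __name__ == "__main__":
    # Toy usage with random tensors standing in for encoder output and parses.
    model = SyntaxSemanticsCorefScorer()
    ctx = torch.randn(1, 12, 768)
    dep = torch.randint(0, 50, (1, 12))
    srl = torch.randint(0, 30, (1, 12))
    scores = model(ctx, dep, srl, mention_spans=[(0, 1), (4, 4), (7, 9)])
    print(scores.shape)  # torch.Size([1, 3, 3])
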
@article{liu2025_2504.05855,
  title={Enhancing Coreference Resolution with Pretrained Language Models: Bridging the Gap Between Syntax and Semantics},
  author={Xingzu Liu and Songhang deng and Mingbang Wang and Zhang Dong and Le Dai and Jiyuan Li and Ruilin Nong},
  journal={arXiv preprint arXiv:2504.05855},
  year={2025}
}