
An Empirical Study of Using Pre-trained BERT Models for Vietnamese Relation Extraction Task at VLSP 2020

Abstract

In this paper, we present an empirical study of using pre-trained BERT models for the relation extraction task at the VLSP 2020 Evaluation Campaign. We applied two state-of-the-art BERT-based models: R-BERT and BERT with entity starts. For each model, we compared two pre-trained BERT checkpoints: FPTAI/vibert and NlpHUST/vibert4news. We found that NlpHUST/vibert4news significantly outperforms FPTAI/vibert on the Vietnamese relation extraction task. Finally, we proposed a simple ensemble model that combines R-BERT and BERT with entity starts. Our ensemble model slightly improved over the two single models on the development data provided by the task organizers.
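To make the two ingredients named above concrete, the sketch below gives a minimal PyTorch illustration under stated assumptions: the "entity starts" head classifies the concatenation of the encoder's hidden states at the two entity-marker positions (in the spirit of Soares et al., 2019), and the ensemble averages the class probabilities of the two single models. The class name `EntityStartClassifier`, the marker positions, and the probability-averaging rule are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class EntityStartClassifier(nn.Module):
    """Relation head over the hidden states at the two entity-start
    marker positions (one common reading of "BERT with entity starts")."""

    def __init__(self, hidden_size: int, num_relations: int):
        super().__init__()
        self.classifier = nn.Linear(2 * hidden_size, num_relations)

    def forward(self, hidden_states: torch.Tensor,
                e1_pos: torch.Tensor, e2_pos: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden) from any BERT encoder;
        # e1_pos / e2_pos: (batch,) indices of the entity-start markers.
        rows = torch.arange(hidden_states.size(0))
        pair = torch.cat([hidden_states[rows, e1_pos],
                          hidden_states[rows, e2_pos]], dim=-1)
        return self.classifier(pair)


# Toy usage: batch of 2, sequence length 8, hidden size 16, 4 relation labels.
encoder_out = torch.randn(2, 8, 16)
head = EntityStartClassifier(hidden_size=16, num_relations=4)
logits_starts = head(encoder_out,
                     e1_pos=torch.tensor([1, 2]),
                     e2_pos=torch.tensor([5, 6]))

# Hypothetical "simple ensemble": average the class probabilities of the
# two single models and take the argmax. logits_rbert stands in for the
# R-BERT model's output here.
logits_rbert = torch.randn(2, 4)
probs = (logits_rbert.softmax(-1) + logits_starts.softmax(-1)) / 2
predictions = probs.argmax(-1)
```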
