SJ_AJ@DravidianLangTech-EACL2021: Task-Adaptive Pre-Training of
Multilingual BERT models for Offensive Language Identification
Akshat Gupta
Abstract
In this paper, we present our submission for the EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages. Our final system is an ensemble of mBERT and XLM-RoBERTa models that leverages task-adaptive pre-training of multilingual BERT models with a masked language modeling objective. Our system was ranked 1st for Kannada, 2nd for Malayalam, and 3rd for Tamil.
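Task-adaptive pre-training continues the masked language modeling objective on unlabeled text drawn from the task itself before fine-tuning on the labeled data. Below is a minimal sketch of this step using Hugging Face Transformers; the file name "task_corpus.txt" and all hyperparameters are illustrative assumptions, not the authors' actual settings.

```python
# Task-adaptive pre-training (TAPT) sketch: continue MLM training of a
# multilingual BERT model on unlabeled task text before fine-tuning.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "bert-base-multilingual-cased"  # or "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Hypothetical file holding unlabeled task text, one example per line.
dataset = load_dataset("text", data_files={"train": "task_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

# Randomly mask 15% of tokens; the model learns to reconstruct them.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tapt-mbert",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    data_collator=collator,
    train_dataset=tokenized,
)
trainer.train()
# The adapted checkpoint would then be fine-tuned on the labeled
# offensive-language classification data.
```

The adapted checkpoints for mBERT and XLM-RoBERTa can then be fine-tuned on the labeled data and combined in an ensemble, as the abstract describes.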
