Subasa -- Adapting Language Models for Low-resourced Offensive Language Detection in Sinhala

Abstract

Accurate detection of offensive language is essential for many applications related to social media safety, yet performance on this task contrasts sharply between low- and high-resource languages. In this paper, we adapt fine-tuning strategies that have not been previously explored for Sinhala to the downstream task of offensive language detection. Using this approach, we introduce four models: "Subasa-XLM-R", which incorporates an intermediate pre-finetuning step using Masked Rationale Prediction; two variants of "Subasa-Llama"; and "Subasa-Mistral". The latter models are fine-tuned versions of Llama (3.2) and Mistral (v0.3), respectively, trained with a task-specific strategy. We evaluate our models on the SOLD benchmark dataset for Sinhala offensive language detection, where all of them outperform existing baselines. Subasa-XLM-R achieves the highest Macro F1 score (0.84), surpassing state-of-the-art large language models such as GPT-4o when evaluated on the same SOLD benchmark under zero-shot settings. The models and code are publicly available.
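The abstract reports results as Macro F1, the unweighted mean of per-class F1 scores, which is standard for imbalanced offensive-language benchmarks like SOLD. A minimal sketch of that metric (not the authors' code; the "OFF"/"NOT" labels are illustrative of SOLD-style binary annotation):

```python
# Illustrative sketch: Macro F1 averages per-class F1 scores equally,
# so the minority (offensive) class counts as much as the majority class.

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 over the classes present in y_true."""
    classes = sorted(set(y_true))
    f1_scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

# Hypothetical SOLD-style binary labels: "OFF" (offensive) vs "NOT".
gold = ["OFF", "OFF", "NOT", "NOT"]
pred = ["OFF", "NOT", "NOT", "NOT"]
score = macro_f1(gold, pred)
```

In practice `sklearn.metrics.f1_score(..., average="macro")` computes the same quantity; the sketch just makes the per-class averaging explicit.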

@article{haturusinghe2025_2504.02178,
  title={Subasa -- Adapting Language Models for Low-resourced Offensive Language Detection in Sinhala},
  author={Shanilka Haturusinghe and Tharindu Cyril Weerasooriya and Marcos Zampieri and Christopher M. Homan and S.R. Liyanage},
  journal={arXiv preprint arXiv:2504.02178},
  year={2025}
}