Forging GEMs: Advancing Greek NLP through Quality-Based Corpus Curation

The advancement of natural language processing for morphologically rich and moderately resourced languages like Modern Greek has been hindered by architectural stagnation, data scarcity, and limited context-processing capabilities, particularly in specialized domains such as law. In this work, we propose the Greek Embedding Models (GEMs), a new family of transformer-based language models specifically developed to address these limitations through architectural diversity and enhanced data curation. The proposed family of models is trained on several large-scale, meticulously curated corpora, encompassing both comprehensive general-domain datasets and specialized legal collections, addressing the persistent data scarcity that has impeded progress in Greek language modeling. The proposed quality-based corpus curation methodology incorporates extensive preprocessing pipelines, sophisticated deduplication strategies, and targeted repetition of high-quality legal sub-corpora to enhance domain adaptation. The GEMs family comprises both established architectures (RoBERTa and Longformer) and advanced models not previously applied to Greek (ELECTRA, ConvBERT, and ModernBERT), providing comprehensive coverage of modern transformer designs. Additionally, we introduce the first bilingual Greek-English embedding models tailored for cross-lingual legal applications. Comprehensive evaluation across three core natural language understanding benchmarks demonstrates that the proposed GEM-RoBERTa and GEM-ConvBERT achieve statistically significant performance improvements over established state-of-the-art models, with accuracy gains of up to 3.6%, while statistical analysis using Friedman Aligned-Ranks and Finner post-hoc tests confirms the superiority of our approach across multiple evaluation metrics.
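To make the corpus curation idea concrete, the sketch below illustrates two ingredients named in the abstract: hash-based deduplication of documents and targeted repetition (upsampling) of a high-quality legal sub-corpus. It is a minimal illustration, not the paper's actual pipeline; the function names, the normalization choices, and the repetition factor are hypothetical placeholders.

```python
import hashlib
import re
import unicodedata


def normalize(text: str) -> str:
    """Normalize a document before hashing: Unicode NFC, lowercase, collapsed whitespace."""
    text = unicodedata.normalize("NFC", text).lower()
    return re.sub(r"\s+", " ", text).strip()


def deduplicate(docs):
    """Drop exact duplicates by hashing the normalized text of each document."""
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique


def build_pretraining_corpus(general_docs, legal_docs, legal_repetitions=2):
    """Combine a deduplicated general-domain corpus with a deduplicated legal
    sub-corpus, repeating the legal documents to strengthen domain adaptation."""
    return deduplicate(general_docs) + deduplicate(legal_docs) * legal_repetitions


# Toy usage: the document lists and repetition factor are placeholders.
corpus = build_pretraining_corpus(
    general_docs=["Γενικό κείμενο Α.", "Γενικό κείμενο Α."],  # duplicate, kept once
    legal_docs=["Άρθρο 1. Νομικό κείμενο."],
    legal_repetitions=2,
)
print(len(corpus))  # 1 general document + 1 legal document repeated twice = 3
```

In practice, a production pipeline of the kind the abstract describes would add language filtering, quality heuristics, and near-duplicate (rather than only exact-duplicate) detection, but the structure above captures the curation-plus-upsampling principle.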