
Instruction Tuning on Public Government and Cultural Data for Low-Resource Language: a Case Study in Kazakh

Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Main: 8 pages
Appendix: 18 pages
Bibliography: 4 pages
8 figures
21 tables
Abstract

Instruction tuning in low-resource languages remains underexplored due to limited text data, particularly in government and cultural domains. To address this, we introduce and open-source a large-scale (10,600 samples) instruction fine-tuning (IFT) dataset covering key institutional and cultural knowledge relevant to Kazakhstan. Our dataset enhances LLMs' understanding of procedural, legal, and structural governance topics. We employ LLM-assisted data generation, comparing open-weight and closed-weight models for dataset construction, and select GPT-4o as the generation backbone. Every entry in the dataset undergoes full manual verification to ensure high quality. We also show that fine-tuning Qwen, Falcon, and Gemma models on our dataset yields consistent performance improvements on both multiple-choice and generative tasks, demonstrating the potential of LLM-assisted instruction tuning for low-resource languages.
