Batayan: A Filipino NLP benchmark for evaluating Large Language Models

Recent advances in large language models (LLMs) have demonstrated remarkable capabilities on widely benchmarked high-resource languages; however, the linguistic nuances of under-resourced languages remain largely unexplored. We introduce Batayan, a holistic Filipino benchmark designed to systematically evaluate LLMs across three key natural language processing (NLP) competencies: understanding, reasoning, and generation. Batayan consolidates eight tasks, covering both Tagalog and code-switched Taglish utterances. Our rigorous, native-speaker-driven annotation process ensures fluency and fidelity to the complex morphological and syntactic structures of Filipino, alleviating a pervasive translationese bias in existing Filipino corpora. We report empirical results on a variety of multilingual LLMs, highlighting significant performance gaps that signal the under-representation of Filipino in pretraining corpora, the unique hurdles in modeling Filipino's rich morphology and constructions, and the importance of explicit Filipino language support and instruction tuning. Moreover, we discuss the practical challenges encountered in dataset construction and propose principled solutions for building culturally and linguistically faithful resources for under-represented languages. We also provide a public benchmark and leaderboard as a clear foundation for iterative, community-driven progress in Filipino NLP.
@article{montalan2025_2502.14911,
  title={Batayan: A Filipino NLP benchmark for evaluating Large Language Models},
  author={Jann Railey Montalan and Jimson Paulo Layacan and David Demitri Africa and Richell Isaiah Flores and Michael T. Lopez II and Theresa Denise Magsajo and Anjanette Cayabyab and William Chandra Tjhi},
  journal={arXiv preprint arXiv:2502.14911},
  year={2025}
}