ResearchTrend.AI
TLUE: A Tibetan Language Understanding Evaluation Benchmark

15 March 2025
Fan Gao
Cheng Huang
Nyima Tashi
Xiangxiang Wang
Thupten Tsering
Ban Ma-bao
Renzeg Duojie
Gadeng Luosang
Rinchen Dongrub
Dorje Tashi
Xiao Feng
Yongbin Yu
Abstract

Large language models (LLMs) have made tremendous progress in recent years, but low-resource languages, such as Tibetan, remain significantly underrepresented in their evaluation. Despite Tibetan being spoken by over seven million people, it has largely been neglected in the development and assessment of LLMs. To address this gap, we present TLUE (A Tibetan Language Understanding Evaluation Benchmark), the first large-scale benchmark for assessing LLMs' capabilities in Tibetan. TLUE comprises two major components: (1) a comprehensive multi-task understanding benchmark spanning 5 domains and 67 subdomains, and (2) a safety benchmark covering 7 subdomains. We evaluate a diverse set of state-of-the-art LLMs. Experimental results demonstrate that most LLMs perform below the random baseline, highlighting the considerable challenges LLMs face in processing Tibetan, a low-resource language. TLUE provides an essential foundation for driving future research and progress in Tibetan language understanding and underscores the need for greater inclusivity in LLM development.
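The abstract's key finding is that most evaluated LLMs score below the random baseline on TLUE. As a minimal sketch of what that comparison means, the following assumes multiple-choice items with a fixed number of options, where uniform guessing scores 1/k on average; the function names, option labels, and toy data are hypothetical, not part of the TLUE evaluation code.

```python
# Hypothetical sketch: comparing benchmark accuracy to the random baseline.
# For multiple-choice items with k options, uniform random guessing scores
# 1/k in expectation; a model scoring below that line is doing worse than chance.

def random_baseline(num_options: int) -> float:
    """Expected accuracy of uniform random guessing over num_options choices."""
    return 1.0 / num_options

def accuracy(predictions: list[str], answers: list[str]) -> float:
    """Fraction of items answered correctly."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Toy 4-option items (labels A-D and the model outputs are illustrative):
answers     = ["A", "C", "B", "D", "A", "B", "C", "D"]
predictions = ["B", "B", "A", "A", "C", "C", "A", "D"]

acc = accuracy(predictions, answers)        # 1 of 8 correct -> 0.125
baseline = random_baseline(4)               # 0.25
print(f"accuracy={acc:.3f}, baseline={baseline:.3f}, below={acc < baseline}")
```

With this toy data the model answers 1 of 8 items correctly (0.125), falling below the 0.25 chance line, which is the pattern the abstract reports for most LLMs on Tibetan.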

@article{gao2025_2503.12051,
  title={TLUE: A Tibetan Language Understanding Evaluation Benchmark},
  author={Fan Gao and Cheng Huang and Nyima Tashi and Xiangxiang Wang and Thupten Tsering and Ban Ma-bao and Renzeg Duojie and Gadeng Luosang and Rinchen Dongrub and Dorje Tashi and Xiao Feng and Yongbin Yu},
  journal={arXiv preprint arXiv:2503.12051},
  year={2025}
}