
Evaluating Polish linguistic and cultural competency in large language models

Abstract

Large language models (LLMs) are becoming increasingly proficient at processing and generating multilingual text, which allows them to address real-world problems more effectively. However, language understanding is a far more complex task that goes beyond simple text analysis. It requires familiarity with cultural context, including references to everyday life, historical events, traditions, folklore, literature, and pop culture. A lack of such knowledge can lead to misinterpretations and subtle, hard-to-detect errors. To examine language models' knowledge of the Polish cultural context, we introduce the Polish linguistic and cultural competency benchmark, consisting of 600 manually crafted questions. The benchmark is divided into six categories: history, geography, culture & tradition, art & entertainment, grammar, and vocabulary. As part of our study, we conduct an extensive evaluation involving over 30 open-weight and commercial LLMs. Our experiments provide a new perspective on Polish competencies in language models, moving beyond traditional natural language processing tasks and general knowledge assessment.

@article{dadas2025_2503.00995,
  title={Evaluating Polish linguistic and cultural competency in large language models},
  author={Sławomir Dadas and Małgorzata Grębowiec and Michał Perełkiewicz and Rafał Poświata},
  journal={arXiv preprint arXiv:2503.00995},
  year={2025}
}