Uncovering inequalities in new knowledge learning by large language models across different languages

6 March 2025
Chenglong Wang
Haoyu Tang
Xiyuan Yang
Yueqi Xie
Jina Suh
Sunayana Sitaram
Junming Huang
Yu Xie
Zhaoya Gong
Xing Xie
Fangzhao Wu
Abstract

As large language models (LLMs) gradually become integral tools for problem solving in daily life worldwide, understanding linguistic inequality is becoming increasingly important. Existing research has primarily focused on static analyses that assess the disparities in the existing knowledge and capabilities of LLMs across languages. However, LLMs are continuously evolving, acquiring new knowledge to generate up-to-date, domain-specific responses. Investigating linguistic inequalities within this dynamic process is, therefore, also essential. In this paper, we explore inequalities in new knowledge learning by LLMs across different languages along four key dimensions: effectiveness, transferability, prioritization, and robustness. Through extensive experiments under two settings (in-context learning and fine-tuning) using both proprietary and open-source models, we demonstrate that low-resource languages consistently face disadvantages across all four dimensions. By shedding light on these disparities, we aim to raise awareness of linguistic inequalities in LLMs' new knowledge learning, fostering the development of more inclusive and equitable future LLMs.
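As a rough illustration of the in-context-learning setting described above, the Python sketch below shows how a newly introduced fact might be injected into prompts in several languages and scored for effectiveness. It is not the authors' code: the example fact, the languages, the exact-match scoring, and the ask_model callable are all hypothetical placeholders standing in for whatever proprietary or open-source model is being evaluated.

# Minimal sketch (assumptions noted above): probe whether a model picks up a
# new fact from context, in more than one language, and score by exact match.
from typing import Callable, Dict

# Hypothetical new-knowledge item: the same invented fact and question per language.
NEW_FACT = {
    "en": ("The fictional Zephyr-9 telescope was launched in 2031.",
           "In what year was the Zephyr-9 telescope launched?"),
    "sw": ("Darubini ya kubuni ya Zephyr-9 ilizinduliwa mwaka 2031.",
           "Darubini ya Zephyr-9 ilizinduliwa mwaka gani?"),
}
GOLD_ANSWER = "2031"


def icl_effectiveness(ask_model: Callable[[str], str]) -> Dict[str, float]:
    """Per-language score for one fact (0 or 1); averaging over many facts
    would give an effectiveness-style measure for the in-context setting."""
    scores = {}
    for lang, (fact, question) in NEW_FACT.items():
        prompt = f"Context: {fact}\nQuestion: {question}\nAnswer:"
        answer = ask_model(prompt)
        scores[lang] = float(GOLD_ANSWER in answer)
    return scores


if __name__ == "__main__":
    # Dummy model so the sketch runs as-is; it only "learns" from the English
    # prompt, mimicking the kind of low-resource gap the paper reports.
    def dummy_model(prompt: str) -> str:
        return "2031" if "telescope" in prompt else "I do not know."

    print(icl_effectiveness(dummy_model))

Swapping dummy_model for a real chat-completion client, and the toy fact for a set of genuinely recent facts, would turn this into a per-language effectiveness comparison of the kind the abstract describes.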

View on arXiv
@article{wang2025_2503.04064,
  title={Uncovering inequalities in new knowledge learning by large language models across different languages},
  author={Chenglong Wang and Haoyu Tang and Xiyuan Yang and Yueqi Xie and Jina Suh and Sunayana Sitaram and Junming Huang and Yu Xie and Zhaoya Gong and Xing Xie and Fangzhao Wu},
  journal={arXiv preprint arXiv:2503.04064},
  year={2025}
}