39
0

Beyond English: Unveiling Multilingual Bias in LLM Copyright Compliance

Abstract

Large Language Models (LLMs) have raised significant concerns regarding the fair use of copyright-protected content. While prior studies have examined the extent to which LLMs reproduce copyrighted materials, they have predominantly focused on English, neglecting multilingual dimensions of copyright protection. In this work, we investigate multilingual biases in LLM copyright protection by addressing two key questions: (1) Do LLMs exhibit bias in protecting copyrighted works across languages? (2) Is it easier to elicit copyrighted content using prompts in specific languages? To explore these questions, we construct a dataset of popular song lyrics in English, French, Chinese, and Korean and systematically probe seven LLMs using prompts in these languages. Our findings reveal significant imbalances in LLMs' handling of copyrighted content, both in terms of the language of the copyrighted material and the language of the prompt. These results highlight the need for further research and development of more robust, language-agnostic copyright protection mechanisms to ensure fair and consistent protection across languages.

View on arXiv
@article{chen2025_2503.05713,
  title={ Beyond English: Unveiling Multilingual Bias in LLM Copyright Compliance },
  author={ Yupeng Chen and Xiaoyu Zhang and Yixian Huang and Qian Xie },
  journal={arXiv preprint arXiv:2503.05713},
  year={ 2025 }
}
Comments on this paper