
Benchmarking Multi-National Value Alignment for Large Language Models

Abstract

Do Large Language Models (LLMs) hold positions that conflict with your country's values? Occasionally they do! However, existing works primarily focus on ethical reviews and fail to capture the diversity of national values, which encompass broader policy, legal, and moral considerations. Furthermore, current benchmarks that rely on spectrum tests using manually designed questionnaires are not easily scalable. To address these limitations, we introduce NaVAB, a comprehensive benchmark for evaluating the alignment of LLMs with the values of five major nations: China, the United States, the United Kingdom, France, and Germany. NaVAB implements a national value extraction pipeline to efficiently construct value assessment datasets. Specifically, we propose a modeling procedure with instruction tagging to process raw data sources, a screening process to filter value-related topics, and a generation process with a Conflict Reduction mechanism to filter non-conflicting data. We conduct extensive experiments on various LLMs across countries, and the results provide insights that assist in identifying misaligned scenarios. Moreover, we demonstrate that NaVAB can be combined with alignment techniques to effectively reduce value concerns by aligning LLMs' values with those of the target country.
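The abstract describes a three-stage extraction pipeline: instruction tagging over raw sources, screening for value-related topics, then generation with Conflict Reduction. A loose, hypothetical sketch of that flow is below; all function names and the keyword/duplicate heuristics are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a NaVAB-style pipeline; names and heuristics are
# assumptions for illustration, not the authors' code.

def tag_instructions(raw_records):
    """Modeling step: attach a coarse topic tag to each raw record
    (toy keyword heuristic standing in for instruction tagging)."""
    topics = {"court": "policy", "law": "policy", "moral": "moral"}
    tagged = []
    for text in raw_records:
        tag = next((t for kw, t in topics.items() if kw in text.lower()), "other")
        tagged.append({"text": text, "tag": tag})
    return tagged

def screen_value_related(tagged):
    """Screening step: keep only records tagged with value-related topics."""
    return [r for r in tagged if r["tag"] != "other"]

def reduce_conflicts(records):
    """Generation step with a stand-in 'Conflict Reduction': here just a
    case-insensitive duplicate filter so redundant items yield one question."""
    seen, kept = set(), []
    for r in records:
        key = r["text"].lower()
        if key not in seen:
            seen.add(key)
            kept.append(r)
    return kept

raw = [
    "New court ruling on speech",
    "NEW COURT ruling on speech",   # duplicate up to casing
    "Recipe for sourdough bread",   # not value-related
]
dataset = reduce_conflicts(screen_value_related(tag_instructions(raw)))
```

Under these toy heuristics, only one value-related, de-duplicated record survives; the real benchmark presumably uses learned tagging and a substantive conflict criterion in place of these placeholders.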

@article{shi2025_2504.12911,
  title={Benchmarking Multi-National Value Alignment for Large Language Models},
  author={Weijie Shi and Chengyi Ju and Chengzhong Liu and Jiaming Ji and Jipeng Zhang and Ruiyuan Zhang and Jia Zhu and Jiajie Xu and Yaodong Yang and Sirui Han and Yike Guo},
  journal={arXiv preprint arXiv:2504.12911},
  year={2025}
}