CALM: Unleashing the Cross-Lingual Self-Aligning Ability of Language Model Question Answering

30 January 2025
Yumeng Wang
Zhiyuan Fan
Qingyun Wang
May Fung
Heng Ji
Abstract

Large Language Models (LLMs) are pretrained on extensive multilingual corpora to acquire both language-specific cultural knowledge and general knowledge. While LLMs should ideally provide consistent responses to culture-independent questions across languages, we observe significant performance disparities. To address this, we explore the Cross-Lingual Self-Aligning ability of Language Models (CALM) to align knowledge across languages. Specifically, for a given question, we sample multiple responses across different languages and select the most self-consistent response as the target, leaving the remaining responses as negative examples. We then employ direct preference optimization (DPO) to align the model's knowledge across different languages. Evaluations on the MedQA and X-CSQA datasets demonstrate CALM's effectiveness in enhancing cross-lingual knowledge question answering, both in zero-shot and retrieval-augmented settings. We also find that increasing the number of languages involved in CALM training leads to higher accuracy and consistency. We offer a qualitative analysis of how cross-lingual consistency can enhance knowledge alignment and explore the method's generalizability.
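
The following is a minimal sketch of the preference-pair construction the abstract describes: sample answers to the same question in several languages, treat the most self-consistent answer as the preferred target, and pair it against disagreeing samples for DPO training. Function names, data shapes, and the majority-vote normalization are illustrative assumptions, not the authors' released code.

    from collections import Counter

    def build_calm_pairs(question_by_lang, sampled_answers_by_lang):
        """Construct DPO-style preference triples from cross-lingual samples.

        question_by_lang: {lang: prompt string in that language}
        sampled_answers_by_lang: {lang: list of sampled answers
                                  (e.g. multiple-choice option letters)}
        Returns a list of {"prompt", "chosen", "rejected"} dicts.
        """
        # Pool all sampled answers across languages and pick the most
        # self-consistent one (majority vote) as the preferred target.
        all_answers = [a for answers in sampled_answers_by_lang.values()
                       for a in answers]
        target, _ = Counter(all_answers).most_common(1)[0]

        pairs = []
        for lang, answers in sampled_answers_by_lang.items():
            prompt = question_by_lang[lang]
            for ans in answers:
                if ans != target:
                    # The self-consistent answer is "chosen"; disagreeing
                    # samples become "rejected" in each language's prompt.
                    pairs.append({"prompt": prompt,
                                  "chosen": target,
                                  "rejected": ans})
        return pairs

    # Toy usage with multiple-choice answers sampled in two languages.
    pairs = build_calm_pairs(
        {"en": "Question in English ...", "zh": "Question in Chinese ..."},
        {"en": ["B", "B", "C"], "zh": ["B", "D"]},
    )
    # `pairs` can then be fed to a standard DPO trainer to align the
    # model's answers across languages, as the abstract describes.

The sketch assumes a multiple-choice setting (as in MedQA and X-CSQA), where answers can be compared directly; free-form answers would need an additional normalization or equivalence-checking step before the majority vote.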

View on arXiv
@article{wang2025_2501.18457,
  title={CALM: Unleashing the Cross-Lingual Self-Aligning Ability of Language Model Question Answering},
  author={Yumeng Wang and Zhiyuan Fan and Qingyun Wang and May Fung and Heng Ji},
  journal={arXiv preprint arXiv:2501.18457},
  year={2025}
}