Language-specific Neurons Do Not Facilitate Cross-Lingual Transfer

21 March 2025
Soumen Kumar Mondal
Sayambhu Sen
Abhishek Singhania
Preethi Jyothi
Abstract

Multilingual large language models (LLMs) aim for robust natural language understanding across diverse languages, yet their performance degrades significantly on low-resource languages. This work explores whether existing techniques for identifying language-specific neurons can be leveraged to enhance cross-lingual task performance in low-resource languages. We conduct detailed experiments covering existing language-specific neuron identification techniques (such as Language Activation Probability Entropy and activation probability-based thresholding) and neuron-specific LoRA fine-tuning with models such as Llama 3.1 and Mistral Nemo. We find that such neuron-specific interventions are insufficient to yield cross-lingual improvements on downstream tasks (XNLI, XQuAD) in low-resource languages. This study highlights the challenges in achieving cross-lingual generalization and provides critical insights for multilingual LLMs.
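
The Language Activation Probability Entropy (LAPE) technique named in the abstract scores each neuron by how concentrated its activations are on a single language. Below is a minimal sketch of that idea, assuming per-neuron, per-language activation probabilities have already been collected: normalize each neuron's probabilities into a distribution over languages and take its entropy, then keep low-entropy neurons that also fire often enough in at least one language (the activation probability-based thresholding the abstract also mentions). The function names, percentile cut-off, and min_prob threshold are illustrative assumptions, not values from the paper.

import numpy as np

def lape_scores(act_prob, eps=1e-12):
    """Language Activation Probability Entropy (LAPE), one score per neuron.

    act_prob: (num_neurons, num_languages) array where act_prob[i, l] is
    the fraction of language-l tokens on which neuron i fires (e.g., a
    positive post-activation value in an FFN layer).
    Low entropy means the neuron activates mostly for one language.
    """
    # Normalize each neuron's activation probabilities into a
    # distribution over languages, then take its Shannon entropy.
    dist = act_prob / (act_prob.sum(axis=1, keepdims=True) + eps)
    return -(dist * np.log(dist + eps)).sum(axis=1)

def select_language_specific(act_prob, entropy_pct=5.0, min_prob=0.1):
    """Illustrative selection rule: neurons in the lowest `entropy_pct`
    percentile of LAPE that also fire with probability at least
    `min_prob` in some language."""
    lape = lape_scores(act_prob)
    low_entropy = lape <= np.percentile(lape, entropy_pct)
    fires_somewhere = (act_prob >= min_prob).any(axis=1)
    return np.where(low_entropy & fires_somewhere)[0]

# Toy example: 4 neurons, 3 languages.
probs = np.array([
    [0.90, 0.02, 0.01],  # fires almost only for language 0
    [0.30, 0.35, 0.33],  # fires uniformly across languages
    [0.05, 0.80, 0.04],  # fires almost only for language 1
    [0.01, 0.01, 0.02],  # barely fires at all
])
# Loose percentile cut-off just for this 4-neuron toy; prints [0 2].
print(select_language_specific(probs, entropy_pct=50.0))

Presumably, the neuron-specific LoRA fine-tuning studied in the paper then restricts adapter updates to neurons selected by a procedure like this, though the abstract does not spell out the exact mechanism.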

@article{mondal2025_2503.17456,
  title={Language-specific Neurons Do Not Facilitate Cross-Lingual Transfer},
  author={Soumen Kumar Mondal and Sayambhu Sen and Abhishek Singhania and Preethi Jyothi},
  journal={arXiv preprint arXiv:2503.17456},
  year={2025}
}