Semantic Aware Linear Transfer by Recycling Pre-trained Language Models for Cross-lingual Transfer

16 May 2025
Seungyoon Lee
Seongtae Hong
Hyeonseok Moon
Heuiseok Lim
Abstract

Large Language Models (LLMs) increasingly incorporate multilingual capabilities, fueling the demand to transfer them into target language-specific models. However, most approaches, which blend the source model's embeddings by replacing the source vocabulary with a target language-specific vocabulary, may constrain expressive capacity in the target language, since the source model is predominantly trained on English data. In this paper, we propose Semantic Aware Linear Transfer (SALT), a novel cross-lingual transfer technique that recycles embeddings from target-language Pre-trained Language Models (PLMs) to transmit the deep representational strengths of PLM-derived embeddings to LLMs. SALT derives a unique regression line for each non-overlapping token, based on its similarity to tokens in the overlap of the source and target vocabularies, and uses it to place that token in the embedding space. Our extensive experiments show that SALT significantly outperforms other transfer methods, achieving lower loss and faster convergence during language adaptation. Notably, SALT achieves remarkable performance in cross-lingual understanding setups compared to other methods. Furthermore, we highlight the scalable use of PLMs to enhance the functionality of contemporary LLMs by conducting experiments with varying architectures.
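
As a rough illustration of the idea described in the abstract, the sketch below initializes embeddings for tokens outside the LLM's vocabulary by fitting a per-token linear regression on tokens shared by both vocabularies. It is a minimal NumPy sketch under assumed details: the name salt_like_init, the cosine-similarity neighbor selection, the neighbor count k, and the least-squares form of the regression are illustrative assumptions, not the authors' implementation.

import numpy as np

def salt_like_init(plm_emb, llm_emb, plm_vocab, llm_vocab, k=32):
    """Map a target-language PLM vocabulary into the LLM's embedding space.

    plm_emb : (|V_plm|, d_plm) input embeddings of the target-language PLM
    llm_emb : (|V_llm|, d_llm) input embeddings of the source LLM
    plm_vocab, llm_vocab : dicts mapping token string -> row index
    """
    new_emb = np.zeros((len(plm_vocab), llm_emb.shape[1]), dtype=llm_emb.dtype)

    # Tokens present in both vocabularies anchor the two embedding spaces.
    shared = [t for t in plm_vocab if t in llm_vocab]
    P = plm_emb[[plm_vocab[t] for t in shared]]   # anchors, PLM side
    L = llm_emb[[llm_vocab[t] for t in shared]]   # anchors, LLM side
    P_unit = P / np.linalg.norm(P, axis=1, keepdims=True)

    for tok, i in plm_vocab.items():
        if tok in llm_vocab:
            # Overlapping token: reuse the LLM's existing embedding directly.
            new_emb[i] = llm_emb[llm_vocab[tok]]
            continue
        # Non-overlapping token: select the k overlapping tokens most similar
        # to it in the PLM space, fit a least-squares linear map from PLM to
        # LLM coordinates on those anchors, and project the token through it.
        # (If k is smaller than the PLM dimension, lstsq returns the
        # minimum-norm solution of the underdetermined fit.)
        v = plm_emb[plm_vocab[tok]]
        sims = P_unit @ (v / np.linalg.norm(v))
        nearest = np.argsort(-sims)[:k]
        W, *_ = np.linalg.lstsq(P[nearest], L[nearest], rcond=None)
        new_emb[i] = v @ W
    return new_emb

In this reading, the overlapping tokens act as anchors tying the two embedding spaces together, and each non-overlapping token receives its own locally fitted regression rather than a single global projection, which matches the abstract's description of per-token regression lines.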

@article{lee2025_2505.10945,
  title={Semantic Aware Linear Transfer by Recycling Pre-trained Language Models for Cross-lingual Transfer},
  author={Seungyoon Lee and Seongtae Hong and Hyeonseok Moon and Heuiseok Lim},
  journal={arXiv preprint arXiv:2505.10945},
  year={2025}
}