Looking for Clues of Language in Multilingual BERT to Improve Cross-lingual Generalization

20 October 2020
Chi-Liang Liu, Tsung-Yuan Hsu, Yung-Sung Chuang, Chung-Yi Li, Hung-yi Lee
Abstract

Token embeddings in multilingual BERT (m-BERT) contain both language and semantic information. We find that a language's representation can be obtained simply by averaging the embeddings of that language's tokens. Given this language representation, we control the output language of m-BERT by manipulating the token embeddings, thereby achieving unsupervised token translation. Based on this observation, we further propose a computationally cheap but effective approach to improving the cross-lingual ability of m-BERT.
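
The core idea can be illustrated with a minimal sketch using Hugging Face Transformers: average a sample of one language's token embeddings to get a language vector, then shift a token's embedding from the source language vector toward the target one and look up its nearest vocabulary neighbors. The tiny token lists, the choice of "good" as the source token, and the use of raw input embeddings below are illustrative assumptions, not the authors' exact procedure.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertModel.from_pretrained("bert-base-multilingual-cased")
emb = model.get_input_embeddings().weight  # shape: (vocab_size, hidden_size)

def language_vector(tokens):
    # Mean input embedding over a set of tokens drawn from one language.
    ids = tokenizer.convert_tokens_to_ids(tokens)
    return emb[torch.tensor(ids)].mean(dim=0)

# Hypothetical, tiny token samples; the paper averages over a much
# larger set of each language's tokens.
en_vec = language_vector(["the", "and", "good", "water", "house"])
de_vec = language_vector(["und", "der", "gut", "Wasser", "Haus"])

with torch.no_grad():
    # Shift an English token's embedding toward the German region of
    # the embedding space.
    word_id = tokenizer.convert_tokens_to_ids("good")
    shifted = emb[word_id] - en_vec + de_vec

    # Nearest vocabulary entries to the shifted vector by cosine
    # similarity: an unsupervised token translation in the spirit of
    # the abstract.
    sims = torch.nn.functional.cosine_similarity(shifted.unsqueeze(0), emb)
    print(tokenizer.convert_ids_to_tokens(sims.topk(5).indices.tolist()))
```

Note that this sketch applies the shift to static input embeddings and reads off nearest neighbors, whereas the paper manipulates token embeddings to control the output language of m-BERT itself.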

View on arXiv: https://arxiv.org/abs/2010.10041