
Do Multi-Lingual Pre-trained Language Models Reveal Consistent Token Attributions in Different Languages?

23 December 2021 · arXiv:2112.12356
Junxiang Wang, Xuchao Zhang, Bo Zong, Yanchi Liu, Wei Cheng, Jingchao Ni, Haifeng Chen, Liang Zhao
AAML

Papers citing "Do Multi-Lingual Pre-trained Language Models Reveal Consistent Token Attributions in Different Languages?"

1 / 1 papers shown

Investigating Multilingual NMT Representations at Scale
Sneha Kudugunta, Ankur Bapna, Isaac Caswell, N. Arivazhagan, Orhan Firat
LRM
05 Sep 2019