XLM-E: Cross-lingual Language Model Pre-training via ELECTRA
arXiv:2106.16138 · 30 June 2021
Zewen Chi, Shaohan Huang, Li Dong, Shuming Ma, Bo Zheng, Saksham Singhal, Payal Bajaj, Xia Song, Xian-Ling Mao, Heyan Huang, Furu Wei
Papers citing "XLM-E: Cross-lingual Language Model Pre-training via ELECTRA" (29 papers shown)
Cross-lingual Transfer of Reward Models in Multilingual Alignment
Jiwoo Hong, Noah Lee, Rodrigo Martínez-Castaño, César Rodríguez, James Thorne · 23 Oct 2024
Speech Representation Learning Revisited: The Necessity of Separate Learnable Parameters and Robust Data Augmentation [SSL]
Hemant Yadav, Sunayana Sitaram, R. Shah · 20 Aug 2024
Sequential Contrastive Audio-Visual Learning
Ioannis Tsiamas, Santiago Pascual, Chunghsin Yeh, Joan Serra · 08 Jul 2024
YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation
Sungkyun Chang, Emmanouil Benetos, Holger Kirchhoff, Simon Dixon · 05 Jul 2024
UniPSDA: Unsupervised Pseudo Semantic Data Augmentation for Zero-Shot Cross-Lingual Natural Language Understanding
Dongyang Li, Taolin Zhang, Jiali Deng, Longtao Huang, Chengyu Wang, Xiaofeng He, Hui Xue · 24 Jun 2024
Fine-Tuned Self-Supervised Speech Representations for Language Diarization in Multilingual Code-Switched Speech
Geoffrey T. Frost, Emily Morris, Joshua Jansen van Vüren, T. Niesler · 15 Dec 2023
Interpreting Indirect Answers to Yes-No Questions in Multiple Languages
Zijie Wang, Md Mosharaf Hossain, Shivam Mathur, Terry Cruz Melo, Kadir Bulut Ozler, ..., Jacob Quintero, MohammadHossein Rezaei, Shreya Nupur Shakya, Md Nayem Uddin, Eduardo Blanco · 20 Oct 2023
Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages [ALM, LRM]
Ayyoob Imani, Peiqin Lin, Amir Hossein Kargaran, Silvia Severini, Masoud Jalili Sabet, ..., Chunlan Ma, Helmut Schmid, André F. T. Martins, François Yvon, Hinrich Schütze · 20 May 2023
Measuring the Instability of Fine-Tuning
Yupei Du, D. Nguyen · 15 Feb 2023
GanLM: Encoder-Decoder Pre-training with an Auxiliary Discriminator [SyDa, AI4CE]
Jian Yang, Shuming Ma, Li Dong, Shaohan Huang, Haoyang Huang, Yuwei Yin, Dongdong Zhang, Liqun Yang, Furu Wei, Zhoujun Li · 20 Dec 2022
TESSP: Text-Enhanced Self-Supervised Speech Pre-training
Zhuoyuan Yao, Shuo Ren, Sanyuan Chen, Ziyang Ma, Pengcheng Guo, Linfu Xie · 24 Nov 2022
Exploring WavLM on Speech Enhancement
Hyungchan Song, Sanyuan Chen, Zhuo Chen, Yu-Huan Wu, Takuya Yoshioka, M. Tang, Jong Won Shin, Shujie Liu · 18 Nov 2022
Speech separation with large-scale self-supervised learning
Zhuo Chen, Naoyuki Kanda, Jian Wu, Yu-Huan Wu, Xiaofei Wang, Takuya Yoshioka, Jinyu Li, S. Sivasankaran, Sefik Emre Eskimez · 09 Nov 2022
Training Dynamics for Curriculum Learning: A Study on Monolingual and Cross-lingual NLU
Fenia Christopoulou, Gerasimos Lampouras, Ignacio Iacobacci · 22 Oct 2022
Short Text Pre-training with Extended Token Classification for E-commerce Query Understanding [RALM]
Haoming Jiang, Tianyu Cao, Zheng Li, Cheng-hsin Luo, Xianfeng Tang, Qingyu Yin, Danqing Zhang, R. Goutam, Bing Yin · 08 Oct 2022
Cross-Lingual Phrase Retrieval [RALM, LRM]
Heqi Zheng, Xiao Zhang, Zewen Chi, Heyan Huang, T. Yan, Tian Lan, Wei Wei, Xian-Ling Mao · 19 Apr 2022
DeepNet: Scaling Transformers to 1,000 Layers [MoE, AI4CE]
Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, Furu Wei · 01 Mar 2022
FLAVA: A Foundational Language And Vision Alignment Model [CLIP, VLM]
Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, Douwe Kiela · 08 Dec 2021
DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing
Pengcheng He, Jianfeng Gao, Weizhu Chen · 18 Nov 2021
WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing [SSL]
Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu-Huan Wu, Shujie Liu, ..., Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei · 26 Oct 2021
Allocating Large Vocabulary Capacity for Cross-lingual Language Model Pre-training [VLM]
Bo Zheng, Li Dong, Shaohan Huang, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei · 15 Sep 2021
A Primer on Pretrained Multilingual Language Models [LRM]
Sumanth Doddapaneni, Gowtham Ramesh, Mitesh M. Khapra, Anoop Kunchukuttan, Pratyush Kumar · 01 Jul 2021
Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment
Zewen Chi, Li Dong, Bo Zheng, Shaohan Huang, Xian-Ling Mao, Heyan Huang, Furu Wei · 11 Jun 2021
Larger-Scale Transformers for Multilingual Masked Language Modeling
Naman Goyal, Jingfei Du, Myle Ott, Giridhar Anantharaman, Alexis Conneau · 02 May 2021
MT6: Multilingual Pretrained Text-to-Text Transformer with Translation Pairs [LRM]
Zewen Chi, Li Dong, Shuming Ma, Shaohan Huang, Xian-Ling Mao, Heyan Huang, Furu Wei · 18 Apr 2021
COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul N. Bennett, Jiawei Han, Xia Song · 16 Feb 2021
Multilingual BERT Post-Pretraining Alignment
Lin Pan, Chung-Wei Hang, Haode Qi, Abhishek Shah, Saloni Potdar, Mo Yu · 23 Oct 2020
MLQA: Evaluating Cross-lingual Extractive Question Answering [ELM]
Patrick Lewis, Barlas Oğuz, Ruty Rinott, Sebastian Riedel, Holger Schwenk · 16 Oct 2019
A Decomposable Attention Model for Natural Language Inference
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit · 06 Jun 2016