Multilingual Unsupervised Neural Machine Translation with Denoising Adapters
arXiv:2110.10472 · 20 October 2021
A. Üstün, Alexandre Berard, Laurent Besacier, Matthias Gallé
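
In brief, the paper trains lightweight language-specific adapters with a denoising objective on monolingual data so that new languages can be added to a pretrained multilingual translation model without parallel text. Purely for orientation, below is a minimal sketch of the standard bottleneck adapter architecture that this line of work (and several of the citing papers on parameter-efficient fine-tuning) builds on; the class name, dimensions, and usage example are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Houlsby-style bottleneck adapter: down-project, non-linearity, up-project, residual."""

    def __init__(self, hidden_size: int = 1024, bottleneck: int = 64):
        super().__init__()
        self.layer_norm = nn.LayerNorm(hidden_size)
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.activation = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection preserves the frozen backbone's representation;
        # only the small adapter parameters would be trained (e.g. with a denoising
        # objective on monolingual text, as the paper's title suggests).
        residual = hidden_states
        x = self.layer_norm(hidden_states)
        x = self.up(self.activation(self.down(x)))
        return residual + x

# Illustrative usage: apply the adapter to a batch of encoder hidden states.
states = torch.randn(2, 16, 1024)   # (batch, sequence length, hidden size)
adapter = BottleneckAdapter()
print(adapter(states).shape)        # torch.Size([2, 16, 1024])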

Papers citing "Multilingual Unsupervised Neural Machine Translation with Denoising Adapters"

9 / 9 papers shown
Unlocking Parameter-Efficient Fine-Tuning for Low-Resource Language Translation
Tong Su, Xin Peng, Sarubi Thillainathan, David Guzmán, Surangika Ranathunga, En-Shiun Annie Lee
05 Apr 2024

Extrapolating Multilingual Understanding Models as Multilingual Generators
Bohong Wu, Fei Yuan, Hai Zhao, Lei Li, Jingjing Xu
Communities: AI4CE
22 May 2023

BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting
Zheng-Xin Yong, Hailey Schoelkopf, Niklas Muennighoff, Alham Fikri Aji, David Ifeoluwa Adelani, ..., Genta Indra Winata, Stella Biderman, Edward Raff, Dragomir R. Radev, Vassilina Nikoulina
Communities: CLL, VLM, AI4CE, LRM
19 Dec 2022

u-HuBERT: Unified Mixed-Modal Speech Pretraining And Zero-Shot Transfer to Unlabeled Modality
Wei-Ning Hsu, Bowen Shi
Communities: SSL, VLM
14 Jul 2022

When does Parameter-Efficient Transfer Learning Work for Machine Translation?
A. Üstün, Asa Cooper Stickland
23 May 2022

Lifting the Curse of Multilinguality by Pre-training Modular Transformers
Jonas Pfeiffer, Naman Goyal, Xi Victoria Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe
Communities: LRM
12 May 2022

Efficient Hierarchical Domain Adaptation for Pretrained Language Models
Alexandra Chronopoulou, Matthew E. Peters, Jesse Dodge
16 Dec 2021

Continual Learning in Multilingual NMT via Language-Specific Embeddings
Alexandre Berard
Communities: CLL
20 Oct 2021

Multi-Way, Multilingual Neural Machine Translation with a Shared Attention Mechanism
Orhan Firat, Kyunghyun Cho, Yoshua Bengio
Communities: LRM, AIMat
06 Jan 2016