Nebula-I: A General Framework for Collaboratively Training Deep Learning Models on Low-Bandwidth Cloud Clusters
19 May 2022 · arXiv: 2205.09470
Yang Xiang, Zhihua Wu, Weibao Gong, Siyu Ding, Xianjie Mo, Yuang Liu, Shuohuan Wang, Peng Liu, Yongshuai Hou, Long Li, Bin Wang, S. Shi, Yaqian Han, Yue Yu, Ge Li, Yu Sun, Yanjun Ma, Dianhai Yu
Papers citing "Nebula-I: A General Framework for Collaboratively Training Deep Learning Models on Low-Bandwidth Cloud Clusters" (5 papers shown)
Federated Learning Challenges and Opportunities: An Outlook
Jie Ding, Eric W. Tramel, Anit Kumar Sahu, Shuang Wu, Salman Avestimehr, Tao Zhang · FedML · 01 Feb 2022
MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators
Zhixing Tan, Xiangwen Zhang, Shuo Wang, Yang Liu · VLM, LRM · 13 Oct 2021
COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul N. Bennett, Jiawei Han, Xia Song · 16 Feb 2021
ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora
Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang · 31 Dec 2020
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro · MoE · 17 Sep 2019