Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model
arXiv:2201.11990
28 January 2022
Shaden Smith, M. Patwary, Brandon Norick, P. LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, V. Korthikanti, Elton Zheng, R. Child, Reza Yazdani Aminabadi, J. Bernauer, Xia Song, M. Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, Bryan Catanzaro
Tags: MoE

Papers citing "Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model"

Showing 50 of 501 citing papers.

What Writing Assistants Can Learn from Programming IDEs
Sergey Titov, Agnia Sergeyuk, T. Bryksin
28 Mar 2023

ChatGPT4PCG Competition: Character-like Level Generation for Science Birds
Pittawat Taveekitworachai, Febri Abdullah, Mury F. Dewantoro, R. Thawonmas, Julian Togelius, Jochen Renz
28 Mar 2023

TransCODE: Co-design of Transformers and Accelerators for Efficient Training and Inference
Shikhar Tuli, N. Jha
27 Mar 2023

On the Creativity of Large Language Models
Giorgio Franceschelli, Mirco Musolesi
27 Mar 2023

$k$NN Prompting: Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference
Benfeng Xu, Quan Wang, Zhendong Mao, Yajuan Lyu, Qiaoqiao She, Yongdong Zhang
24 Mar 2023

EdgeTran: Co-designing Transformers for Efficient Inference on Mobile Edge Platforms
Shikhar Tuli, N. Jha
24 Mar 2023

Large AI Models in Health Informatics: Applications, Challenges, and the Future
Jianing Qiu, Lin Li, Jiankai Sun, Jiachuan Peng, Peilun Shi, ..., Bo Xiao, Wu Yuan, Ningli Wang, Dong Xu, Benny P. L. Lo
Tags: AI4MH, LM&MA
21 Mar 2023

eP-ALM: Efficient Perceptual Augmentation of Language Models
Mustafa Shukor, Corentin Dancette, Matthieu Cord
Tags: MLLM, VLM
20 Mar 2023

What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring
Yonadav Shavit
20 Mar 2023

DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4
Zheng-Long Liu, Yue Huang, Xiao-Xing Yu, Lu Zhang, Zihao Wu, ..., Dinggang Shen, Quanzheng Li, Tianming Liu, Dajiang Zhu, Xiang Li
Tags: LM&MA, MedIm
20 Mar 2023

PanGu-Σ: Towards Trillion Parameter Language Model with Sparse Heterogeneous Computing
Xiaozhe Ren, Pingyi Zhou, Xinfan Meng, Xinjing Huang, Yadao Wang, ..., Jiansheng Wei, Xin Jiang, Teng Su, Qun Liu, Jun Yao
Tags: ALM, MoE
20 Mar 2023

SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models
Vithursan Thangarasa, Abhay Gupta, William Marshall, Tianda Li, Kevin Leong, D. DeCoste, Sean Lie, Shreyas Saxena
Tags: MoE, AI4CE
18 Mar 2023

MCR-DL: Mix-and-Match Communication Runtime for Deep Learning
Quentin G. Anthony, A. A. Awan, Jeff Rasley, Yuxiong He, A. Shafi, Mustafa Abduljabbar, Hari Subramoni, D. Panda
Tags: MoE
15 Mar 2023

ZeroQuant-V2: Exploring Post-training Quantization in LLMs from Comprehensive Study to Low Rank Compensation
Z. Yao, Xiaoxia Wu, Cheng-rong Li, Stephen Youn, Yuxiong He
Tags: MQ
15 Mar 2023

A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training
Siddharth Singh, Olatunji Ruwase, A. A. Awan, Samyam Rajbhandari, Yuxiong He, A. Bhatele
Tags: MoE
11 Mar 2023

Extending the Pre-Training of BLOOM for Improved Support of Traditional Chinese: Models, Methods and Results
Philipp Ennen, Po-Chun Hsu, Chan-Jan Hsu, Chang-Le Liu, Yen-Chen Wu, Yin-Hsiang Liao, Chin-Tung Lin, Da-shan Shiu, Wei-Yun Ma
Tags: OSLM, VLM, AI4CE
08 Mar 2023

A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT
Yihan Cao, Siyu Li, Yixin Liu, Zhiling Yan, Yutong Dai, Philip S. Yu, Lichao Sun
07 Mar 2023

The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset
Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, ..., Violette Lepercq, Suzana Ilić, Margaret Mitchell, Sasha Luccioni, Yacine Jernite
Tags: AI4CE, AILaw
07 Mar 2023

Towards Zero-Shot Functional Compositionality of Language Models
Hangyeol Yu, Myeongho Jeong, Jamin Shin, Hyeongdon Moon, Juneyoung Park, Seungtaek Choi
06 Mar 2023

Angel-PTM: A Scalable and Economical Large-scale Pre-training System in Tencent
Xiaonan Nie, Yi Liu, Fangcheng Fu, J. Xue, Dian Jiao, Xupeng Miao, Yangyu Tao, Bin Cui
Tags: MoE
06 Mar 2023

A Framework for Neurosymbolic Robot Action Planning using Large Language Models
Alessio Capitanelli, Fulvio Mastrogiovanni
Tags: LM&Ro, LLMAG
01 Mar 2023

A Mixed-Methods Approach to Understanding User Trust after Voice Assistant Failures
Amanda Baughan, Allison Mercurio, Ariel Liu, Xuezhi Wang, Jilin Chen, Xiao Ma
01 Mar 2023

AccelTran: A Sparsity-Aware Accelerator for Dynamic Inference with Transformers
Shikhar Tuli, N. Jha
28 Feb 2023

Language Is Not All You Need: Aligning Perception with Language Models
Shaohan Huang, Li Dong, Wenhui Wang, Y. Hao, Saksham Singhal, ..., Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, Furu Wei
Tags: VLM, LRM, MLLM
27 Feb 2023

Full Stack Optimization of Transformer Inference: a Survey
Sehoon Kim, Coleman Hooper, Thanakul Wattanawong, Minwoo Kang, Ruohan Yan, ..., Qijing Huang, Kurt Keutzer, Michael W. Mahoney, Y. Shao, A. Gholami
Tags: MQ
27 Feb 2023

LLaMA: Open and Efficient Foundation Language Models
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, ..., Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample
Tags: ALM, PILM
27 Feb 2023

Active Prompting with Chain-of-Thought for Large Language Models
Shizhe Diao, Pengcheng Wang, Yong Lin, Tong Zhang
Tags: ReLM, KELM, LLMAG, LRM
23 Feb 2023

Optical Transformers
Maxwell G. Anderson, Shifan Ma, Tianyu Wang, Logan G. Wright, Peter L. McMahon
20 Feb 2023

Deep Transformers without Shortcuts: Modifying Self-attention for Faithful Signal Propagation
Bobby He, James Martens, Guodong Zhang, Aleksandar Botev, Andy Brock, Samuel L. Smith, Yee Whye Teh
20 Feb 2023

Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao
Tags: AI4MH
19 Feb 2023

Massively Multilingual Shallow Fusion with Large Language Models
Ke Hu, Tara N. Sainath, Bo-wen Li, Nan Du, Yanping Huang, Andrew M. Dai, Yu Zhang, Rodrigo Cabrera, Z. Chen, Trevor Strohman
17 Feb 2023

Auto-Parallelizing Large Models with Rhino: A Systematic Approach on Production AI Platform
Shiwei Zhang, Lansong Diao, Siyu Wang, Zongyan Cao, Yiliang Gu, Chang Si, Ziji Shi, Zhen Zheng, Chuan Wu, W. Lin
Tags: AI4CE
16 Feb 2023

Speculative Decoding with Big Little Decoder
Sehoon Kim, K. Mangalam, Suhong Moon, Jitendra Malik, Michael W. Mahoney, A. Gholami, Kurt Keutzer
Tags: MoE
15 Feb 2023

Adding Instructions during Pretraining: Effective Way of Controlling Toxicity in Language Models
Shrimai Prabhumoye, M. Patwary, M. Shoeybi, Bryan Catanzaro
Tags: LM&MA
14 Feb 2023

On the Planning Abilities of Large Language Models (A Critical Investigation with a Proposed Benchmark)
Karthik Valmeekam, S. Sreedharan, Matthew Marquez, Alberto Olmo Hernandez, Subbarao Kambhampati
Tags: LLMAG, LRM
13 Feb 2023

Transformer models: an introduction and catalog
X. Amatriain, Ananth Sankar, Jie Bing, Praveen Kumar Bodigutla, Timothy J. Hazen, Michaeel Kazi
12 Feb 2023

Exploiting Sparsity in Pruned Neural Networks to Optimize Large Model Training
Siddharth Singh, A. Bhatele
10 Feb 2023

In-Context Learning with Many Demonstration Examples
Mukai Li, Shansan Gong, Jiangtao Feng, Yiheng Xu, Jinchao Zhang, Zhiyong Wu, Lingpeng Kong
09 Feb 2023

Offsite-Tuning: Transfer Learning without Full Model
Guangxuan Xiao, Ji Lin, Song Han
09 Feb 2023

Revisiting Offline Compression: Going Beyond Factorization-based Methods for Transformer Language Models
Mohammadreza Banaei, Klaudia Bałazy, Artur Kasymov, R. Lebret, Jacek Tabor, Karl Aberer
Tags: OffRL
08 Feb 2023

Is ChatGPT a General-Purpose Natural Language Processing Task Solver?
Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, Diyi Yang
Tags: LM&MA, AI4MH, LRM, ELM
08 Feb 2023

Augmenting Zero-Shot Dense Retrievers with Plug-in Mixture-of-Memories
Suyu Ge, Chenyan Xiong, Corby Rosset, Arnold Overwijk, Jiawei Han, Paul N. Bennett
Tags: VLM
07 Feb 2023

Computation vs. Communication Scaling for Future Transformers on Future Hardware
Suchita Pati, Shaizeen Aga, Mahzabeen Islam, Nuwan Jayasena, Matthew D. Sinclair
06 Feb 2023

STEP: Learning N:M Structured Sparsity Masks from Scratch with Precondition
Yucheng Lu, Shivani Agrawal, Suvinay Subramanian, Oleg Rybakov, Chris De Sa, Amir Yazdanbakhsh
02 Feb 2023

A Survey on Efficient Training of Transformers
Bohan Zhuang, Jing Liu, Zizheng Pan, Haoyu He, Yuetian Weng, Chunhua Shen
02 Feb 2023

Grounding Language Models to Images for Multimodal Inputs and Outputs
Jing Yu Koh, Ruslan Salakhutdinov, Daniel Fried
Tags: MLLM
31 Jan 2023

Partitioning Distributed Compute Jobs with Reinforcement Learning and Graph Neural Networks
Christopher W. F. Parsonson, Zacharaya Shabka, Alessandro Ottino, G. Zervas
31 Jan 2023

UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers
Dachuan Shi, Chaofan Tao, Ying Jin, Zhendong Yang, Chun Yuan, Jiaqi Wang
Tags: VLM, ViT
31 Jan 2023

Understanding the Effectiveness of Very Large Language Models on Dialog Evaluation
Jessica Huynh, Cathy Jiao, Prakhar Gupta, Shikib Mehri, Payal Bajaj, Vishrav Chaudhary, M. Eskénazi
Tags: ELM, LM&MA
27 Jan 2023

Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression
Jaeyong Song, Jinkyu Yim, Jaewon Jung, Hongsun Jang, H. Kim, Youngsok Kim, Jinho Lee
Tags: GNN
24 Jan 2023