Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer

23 January 2017
Noam M. Shazeer
Azalia Mirhoseini
Krzysztof Maziarz
Andy Davis
Quoc V. Le
Geoffrey E. Hinton
J. Dean
MoE

Papers citing "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer"

Showing 50 of 495 citing papers.
ST-MoE: Designing Stable and Transferable Sparse Expert Models
Barret Zoph
Irwan Bello
Sameer Kumar
Nan Du
Yanping Huang
J. Dean
Noam M. Shazeer
W. Fedus
MoE
24
181
0
17 Feb 2022
Compute Trends Across Three Eras of Machine Learning
J. Sevilla
Lennart Heim
A. Ho
T. Besiroglu
Marius Hobbhahn
Pablo Villalobos
30
269
0
11 Feb 2022
Efficient Direct-Connect Topologies for Collective Communications
Liangyu Zhao
Siddharth Pal
Tapan Chugh
Weiyang Wang
Jason Fantl
P. Basu
J. Khoury
Arvind Krishnamurthy
24
6
0
07 Feb 2022
Unified Scaling Laws for Routed Language Models
Aidan Clark
Diego de Las Casas
Aurelia Guy
A. Mensch
Michela Paganini
...
Oriol Vinyals
Jack W. Rae
Erich Elsen
Koray Kavukcuoglu
Karen Simonyan
MoE
27
177
0
02 Feb 2022
Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model
Shaden Smith
M. Patwary
Brandon Norick
P. LeGresley
Samyam Rajbhandari
...
M. Shoeybi
Yuxiong He
Michael Houston
Saurabh Tiwary
Bryan Catanzaro
MoE
90
730
0
28 Jan 2022
Neural-FST Class Language Model for End-to-End Speech Recognition
A. Bruguier
Duc Le
Rohit Prabhavalkar
Dangna Li
Zhe Liu
Bo Wang
Eun Chang
Fuchun Peng
Ozlem Kalinli
M. Seltzer
15
6
0
28 Jan 2022
One Student Knows All Experts Know: From Sparse to Dense
Fuzhao Xue
Xiaoxin He
Xiaozhe Ren
Yuxuan Lou
Yang You
MoMe
MoE
29
20
0
26 Jan 2022
DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale
Samyam Rajbhandari
Conglong Li
Z. Yao
Minjia Zhang
Reza Yazdani Aminabadi
A. A. Awan
Jeff Rasley
Yuxiong He
35
284
0
14 Jan 2022
Glance and Focus Networks for Dynamic Visual Recognition
Gao Huang
Yulin Wang
Kangchen Lv
Haojun Jiang
Wenhui Huang
Pengfei Qi
S. Song
3DH
79
49
0
09 Jan 2022
EvoMoE: An Evolutional Mixture-of-Experts Training Framework via Dense-To-Sparse Gate
Xiaonan Nie
Xupeng Miao
Shijie Cao
Lingxiao Ma
Qibin Liu
Jilong Xue
Youshan Miao
Yi Liu
Zhi-Xin Yang
Tengjiao Wang
MoMe
MoE
23
22
0
29 Dec 2021
Efficient Large Scale Language Modeling with Mixtures of Experts
Mikel Artetxe
Shruti Bhosale
Naman Goyal
Todor Mihaylov
Myle Ott
...
Jeff Wang
Luke Zettlemoyer
Mona T. Diab
Zornitsa Kozareva
Ves Stoyanov
MoE
61
188
0
20 Dec 2021
On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons
Fangshuo Liao
Anastasios Kyrillidis
43
16
0
05 Dec 2021
Transformer-S2A: Robust and Efficient Speech-to-Animation
Liyang Chen
Zhiyong Wu
Jun Ling
Runnan Li
Xu Tan
Sheng Zhao
27
18
0
18 Nov 2021
Achieving Human Parity on Visual Question Answering
Ming Yan
Haiyang Xu
Chenliang Li
Junfeng Tian
Bin Bi
...
Ji Zhang
Songfang Huang
Fei Huang
Luo Si
Rong Jin
32
12
0
17 Nov 2021
Are we ready for a new paradigm shift? A Survey on Visual Deep MLP
Ruiyang Liu
Hai-Tao Zheng
Li Tao
Dun Liang
Haitao Zheng
85
97
0
07 Nov 2021
Data-Driven Offline Optimization For Architecting Hardware Accelerators
Aviral Kumar
Amir Yazdanbakhsh
Milad Hashemi
Kevin Swersky
Sergey Levine
27
36
0
20 Oct 2021
GNN-LM: Language Modeling based on Global Contexts via GNN
Yuxian Meng
Shi Zong
Xiaoya Li
Xiaofei Sun
Tianwei Zhang
Fei Wu
Jiwei Li
LRM
24
37
0
17 Oct 2021
Towards More Effective and Economic Sparsely-Activated Model
Hao Jiang
Ke Zhan
Jianwei Qu
Yongkang Wu
Zhaoye Fei
...
Enrui Hu
Yinxia Zhang
Yantao Jia
Fan Yu
Bo Zhao
MoE
152
12
0
14 Oct 2021
Dynamic Inference with Neural Interpreters
Nasim Rahaman
Muhammad Waleed Gondal
S. Joshi
Peter V. Gehler
Yoshua Bengio
Francesco Locatello
Bernhard Schölkopf
34
31
0
12 Oct 2021
Taming Sparsely Activated Transformer with Stochastic Experts
Simiao Zuo
Xiaodong Liu
Jian Jiao
Young Jin Kim
Hany Hassan
Ruofei Zhang
T. Zhao
Jianfeng Gao
MoE
39
108
0
08 Oct 2021
MoEfication: Transformer Feed-forward Layers are Mixtures of Experts
Zhengyan Zhang
Yankai Lin
Zhiyuan Liu
Peng Li
Maosong Sun
Jie Zhou
MoE
27
117
0
05 Oct 2021
Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference
Sneha Kudugunta
Yanping Huang
Ankur Bapna
M. Krikun
Dmitry Lepikhin
Minh-Thang Luong
Orhan Firat
MoE
119
106
0
24 Sep 2021
Personalized Federated Learning for Heterogeneous Clients with Clustered Knowledge Transfer
Yae Jee Cho
Jianyu Wang
Tarun Chiruvolu
Gauri Joshi
FedML
35
30
0
16 Sep 2021
Universal Simultaneous Machine Translation with Mixture-of-Experts Wait-k Policy
Shaolei Zhang
Yang Feng
MoE
25
39
0
11 Sep 2021
Go Wider Instead of Deeper
Fuzhao Xue
Ziji Shi
Futao Wei
Yuxuan Lou
Yong Liu
Yang You
ViT
MoE
25
80
0
25 Jul 2021
Mixed SIGNals: Sign Language Production via a Mixture of Motion Primitives
Ben Saunders
Necati Cihan Camgöz
Richard Bowden
SLR
27
50
0
23 Jul 2021
Discrete-Valued Neural Communication
Dianbo Liu
Alex Lamb
Kenji Kawaguchi
Anirudh Goyal
Chen Sun
Michael C. Mozer
Yoshua Bengio
23
50
0
06 Jul 2021
A visual introduction to Gaussian Belief Propagation
Joseph Ortiz
Talfan Evans
Andrew J. Davison
15
32
0
05 Jul 2021
Zoo-Tuning: Adaptive Transfer from a Zoo of Models
Yang Shu
Zhi Kou
Zhangjie Cao
Jianmin Wang
Mingsheng Long
26
44
0
29 Jun 2021
On component interactions in two-stage recommender systems
Jiri Hron
K. Krauth
Michael I. Jordan
Niki Kilbertus
CML
LRM
40
31
0
28 Jun 2021
Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
Shiwei Liu
Tianlong Chen
Zahra Atashgahi
Xiaohan Chen
Ghada Sokar
Elena Mocanu
Mykola Pechenizkiy
Zhangyang Wang
D. Mocanu
OOD
31
49
0
28 Jun 2021
Recurrent Neural Network from Adder's Perspective: Carry-lookahead RNN
Haowei Jiang
Fei-wei Qin
Jin Cao
Yong Peng
Yanli Shao
LRM
ODL
16
42
0
22 Jun 2021
Pre-Trained Models: Past, Present and Future
Xu Han
Zhengyan Zhang
Ning Ding
Yuxian Gu
Xiao Liu
...
Jie Tang
Ji-Rong Wen
Jinhui Yuan
Wayne Xin Zhao
Jun Zhu
AIFin
MQ
AI4MH
40
815
0
14 Jun 2021
Scaling Vision with Sparse Mixture of Experts
C. Riquelme
J. Puigcerver
Basil Mustafa
Maxim Neumann
Rodolphe Jenatton
André Susano Pinto
Daniel Keysers
N. Houlsby
MoE
17
575
0
10 Jun 2021
A Survey of Transformers
Tianyang Lin
Yuxin Wang
Xiangyang Liu
Xipeng Qiu
ViT
53
1,088
0
08 Jun 2021
ERNIE-Tiny: A Progressive Distillation Framework for Pretrained Transformer Compression
Weiyue Su
Xuyi Chen
Shi Feng
Jiaxiang Liu
Weixin Liu
Yu Sun
Hao Tian
Hua-Hong Wu
Haifeng Wang
28
13
0
04 Jun 2021
Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks
Dequan Wang
An Ju
Evan Shelhamer
David A. Wagner
Trevor Darrell
AAML
26
26
0
18 May 2021
GSPMD: General and Scalable Parallelization for ML Computation Graphs
Yuanzhong Xu
HyoukJoong Lee
Dehao Chen
Blake A. Hechtman
Yanping Huang
...
Noam M. Shazeer
Shibo Wang
Tao Wang
Yonghui Wu
Zhifeng Chen
MoE
28
127
0
10 May 2021
BasisNet: Two-stage Model Synthesis for Efficient Inference
Mingda Zhang
Chun-Te Chu
A. Zhmoginov
Andrew G. Howard
Brendan Jou
Yukun Zhu
Li Zhang
R. Hwa
Adriana Kovashka
3DH
31
7
0
07 May 2021
PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation
Wei Zeng
Xiaozhe Ren
Teng Su
Hui Wang
Yi-Lun Liao
...
Gaojun Fan
Yaowei Wang
Xuefeng Jin
Qun Liu
Yonghong Tian
ALM
MoE
AI4CE
32
212
0
26 Apr 2021
A novel time-frequency Transformer based on self-attention mechanism and its application in fault diagnosis of rolling bearings
Yifei Ding
M. Jia
Qiuhua Miao
Yudong Cao
16
268
0
19 Apr 2021
Graph Classification by Mixture of Diverse Experts
Fenyu Hu
Liping Wang
Shu Wu
Liang Wang
Tieniu Tan
42
10
0
29 Mar 2021
FastMoE: A Fast Mixture-of-Expert Training System
Jiaao He
J. Qiu
Aohan Zeng
Zhilin Yang
Jidong Zhai
Jie Tang
ALM
MoE
24
94
0
24 Mar 2021
Enhancing Transformer for Video Understanding Using Gated Multi-Level Attention and Temporal Adversarial Training
Saurabh Sahu
Palash Goyal
ViT
29
2
0
18 Mar 2021
Contextual Dropout: An Efficient Sample-Dependent Dropout Module
Xinjie Fan
Shujian Zhang
Korawat Tanwisuth
Xiaoning Qian
Mingyuan Zhou
OOD
BDL
UQCV
30
27
0
06 Mar 2021
Neural Production Systems: Learning Rule-Governed Visual Dynamics
Anirudh Goyal
Aniket Didolkar
Nan Rosemary Ke
Charles Blundell
Philippe Beaudoin
N. Heess
Michael C. Mozer
Yoshua Bengio
OCL
50
82
0
02 Mar 2021
LambdaNetworks: Modeling Long-Range Interactions Without Attention
Irwan Bello
278
179
0
17 Feb 2021
Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
W. Fedus
Barret Zoph
Noam M. Shazeer
MoE
11
2,075
0
11 Jan 2021
Meta Learning Backpropagation And Improving It
Louis Kirsch
Jürgen Schmidhuber
51
56
0
29 Dec 2020
Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
Ben Adlam
Jeffrey Pennington
UD
37
93
0
04 Nov 2020