ResearchTrend.AI

Accelerating Deep Neural Network Training with Inconsistent Stochastic Gradient Descent
arXiv:1603.05544 (v3, latest)
17 March 2016
Linnan Wang, Yi Yang, Martin Renqiang Min, S. Chakradhar

Papers citing "Accelerating Deep Neural Network Training with Inconsistent Stochastic Gradient Descent" (18 papers shown)
Batch-FPM: Random batch-update multi-parameter physical Fourier ptychography neural network
IEEE Transactions on Computational Imaging (TCI), 2024
Ruiqing Sun, Delong Yang, Yiyan Su, Shaohui Zhang, Qun Hao
25 Aug 2024
Multiple Importance Sampling for Stochastic Gradient Estimation
Corentin Salaün, Xingchang Huang, Iliyan Georgiev, Niloy J. Mitra, Gurprit Singh
22 Jul 2024
Importance Sampling for Stochastic Gradient Descent in Deep Neural Networks
Thibault Lahire
29 Mar 2023
PA&DA: Jointly Sampling PAth and DAta for Consistent NAS
Computer Vision and Pattern Recognition (CVPR), 2023
Shunong Lu, Yu Hu, Longxing Yang, Zihao Sun, Jilin Mei, Jianchao Tan, Chengru Song
28 Feb 2023
Large Batch Experience Replay
Thibault Lahire, Matthieu Geist, Emmanuel Rachelson
04 Oct 2021
Block-term Tensor Neural Networks
Neural Networks (NN), 2020
Jinmian Ye, Guangxi Li, Di Chen, Haiqin Yang, Shandian Zhe, Zenglin Xu
10 Oct 2020
Neural Network Retraining for Model Serving
Diego Klabjan, Xiaofeng Zhu
29 Apr 2020
Dynamic Stale Synchronous Parallel Distributed Training for Deep Learning
IEEE International Conference on Distributed Computing Systems (ICDCS), 2019
Xing Zhao, Aijun An, Junfeng Liu, Bin Chen
16 Aug 2019
Dual-branch residual network for lung nodule segmentation
Applied Soft Computing (Appl Soft Comput), 2019
Haichao Cao, Hong Liu, E. Song, C. Hung, Guangzhi Ma, Xiangyang Xu, Renchao Jin, Jianguo Lu
21 May 2019
SuperNeurons: FFT-based Gradient Sparsification in the Distributed Training of Deep Neural Networks
Linnan Wang, Wei Wu, Junyu Zhang, Hang Liu, G. Bosilca, Maurice Herlihy, Rodrigo Fonseca
21 Nov 2018
Compositional Stochastic Average Gradient for Machine Learning and Related Applications
Tsung-Yu Hsieh, Y. El-Manzalawy, Yiwei Sun, Vasant Honavar
04 Sep 2018
Not All Samples Are Created Equal: Deep Learning with Importance Sampling
International Conference on Machine Learning (ICML), 2018
Angelos Katharopoulos, François Fleuret
02 Mar 2018
SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks
Linnan Wang, Jinmian Ye, Yiyang Zhao, Wei Wu, Ang Li, Shuaiwen Leon Song, Zenglin Xu, Tim Kraska
13 Jan 2018
A multi-candidate electronic voting scheme with unlimited participants
Xi Zhao, Yong Ding, Quanyu Zhao
29 Dec 2017
Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition
Jinmian Ye, Linnan Wang, Guangxi Li, Di Chen, Shandian Zhe, Xinqi Chu, Zenglin Xu
14 Dec 2017
Biased Importance Sampling for Deep Neural Network Training
Angelos Katharopoulos, François Fleuret
31 May 2017
Efficient Communications in Training Large Scale Neural Networks
Linnan Wang, Wei Wu, G. Bosilca, R. Vuduc, Zenglin Xu
14 Nov 2016
BLASX: A High Performance Level-3 BLAS Library for Heterogeneous Multi-GPU Computing
Linnan Wang, Wei Wu, Jianxiong Xiao, Yezhou Yang
16 Oct 2015