Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models: Extension

24 May 2019
Yunfei Teng, Wenbo Gao, F. Chalus, A. Choromańska, D. Goldfarb, Adrian Weller
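
The page gives only the title, so as a rough illustration, here is a minimal sketch of the leader-pulling update that "Leader SGD" refers to, assuming the common form in which each worker takes a stochastic gradient step plus a pull toward the currently best-performing ("leader") worker. The names leader_sgd_step, lr, and lam are illustrative, not the paper's API.

```python
import numpy as np

def leader_sgd_step(params, grads, losses, lr=0.1, lam=0.5):
    """One synchronous leader-SGD-style step over a group of workers.

    params: list of np.ndarray, one parameter vector per worker
    grads:  list of np.ndarray, stochastic gradients at those parameters
    losses: list of float, current loss of each worker
    The leader is the worker with the lowest loss; every worker takes a
    gradient step plus a proximity pull toward the leader's parameters.
    """
    z = params[int(np.argmin(losses))]           # leader: best current worker
    return [x - lr * g - lr * lam * (x - z)      # gradient step + leader pull
            for x, g in zip(params, grads)]
```

With lam = 0 this reduces to each worker running plain SGD independently; the pull term is what couples the workers to the current leader.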

Papers citing "Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models: Extension"

3 / 3 papers shown

1. "On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima" [ODL]
   N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
   15 Sep 2016

2. "Understanding symmetries in deep networks"
   Vijay Badrinarayanan, Bamdev Mishra, R. Cipolla
   03 Nov 2015

3. "The Loss Surfaces of Multilayer Networks" [ODL]
   A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
   30 Nov 2014