Nonlinear Conjugate Gradients For Scaling Synchronous Distributed DNN Training

7 December 2018
Saurabh N. Adya
Vinay Palakkode
Oncel Tuzel

Papers citing "Nonlinear Conjugate Gradients For Scaling Synchronous Distributed DNN Training"

3 / 3 papers shown
1. Study on the Large Batch Size Training of Neural Networks Based on the Second Order Gradient
   Fengli Gao, Huicai Zhong · ODL · 16 Dec 2020

2. Stochastic Gradient Descent with Nonlinear Conjugate Gradient-Style Adaptive Momentum
   Bao Wang, Qiang Ye · ODL · 03 Dec 2020

3. Faster SVM Training via Conjugate SMO
   Alberto Torres-Barrán, Carlos M. Alaíz, José R. Dorronsoro · 19 Mar 2020