ResearchTrend.AI

Stochastic batch size for adaptive regularization in deep network optimization

arXiv:2004.06341
14 April 2020
Kensuke Nakamura, Stefano Soatto, Byung-Woo Hong
ODL

Papers citing "Stochastic batch size for adaptive regularization in deep network optimization"

1 / 1 papers shown
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL · 273 · 2,696 · 0
15 Sep 2016