Theory of Deep Learning IIb: Optimization Properties of SGD
arXiv:1801.02254, 7 January 2018

Authors: Chiyuan Zhang, Q. Liao, Alexander Rakhlin, Brando Miranda, Noah Golowich, T. Poggio
Topics: ODL

Papers citing "Theory of Deep Learning IIb: Optimization Properties of SGD" (7 of 7 papers shown)

  1. Global Convergence of SGD On Two Layer Neural Nets
     Pulkit Gopalani, Anirbit Mukherjee
     20 Oct 2022

  2. Multi-Objective Loss Balancing for Physics-Informed Deep Learning
     Rafael Bischof, M. Kraus (PINN, AI4CE)
     19 Oct 2021

  3. A Random Matrix Theory Approach to Damping in Deep Learning
     Diego Granziol, Nicholas P. Baskerville (AI4CE, ODL)
     15 Nov 2020

  4. Orthogonal Deep Neural Networks
     K. Jia, Shuai Li, Yuxin Wen, Tongliang Liu, Dacheng Tao
     15 May 2019

  5. Deep Multi-View Learning using Neuron-Wise Correlation-Maximizing Regularizers
     K. Jia, Jiehong Lin, Mingkui Tan, Dacheng Tao (3DV)
     25 Apr 2019

  6. Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
     Hesham Mostafa, Xin Wang
     15 Feb 2019

  7. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
     N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (ODL)
     15 Sep 2016