Fractional moment-preserving initialization schemes for training deep neural networks

25 May 2020
Mert Gurbuzbalaban
Yuanhan Hu

Papers citing "Fractional moment-preserving initialization schemes for training deep neural networks"

3 papers shown

Limit Theorems for Stochastic Gradient Descent with Infinite Variance
Jose H. Blanchet
Aleksandar Mijatović
Wenhao Yang
21 Oct 2024

Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance
Neural Information Processing Systems (NeurIPS), 2021
Hongjian Wang
Mert Gurbuzbalaban
Lingjiong Zhu
Umut Şimşekli
Murat A. Erdogdu
20 Feb 2021

Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections
International Conference on Machine Learning (ICML), 2021
A. Camuto
Xiaoyu Wang
Lingjiong Zhu
Chris Holmes
Mert Gurbuzbalaban
Umut Simsekli
13 Feb 2021