Trainability of ReLU networks and Data-dependent Initialization

23 July 2019
Yeonjong Shin, George Karniadakis
arXiv:1907.09696 (abs · PDF · HTML)

Papers citing "Trainability of ReLU networks and Data-dependent Initialization"

5 papers shown:
  • Training Thinner and Deeper Neural Networks: Jumpstart Regularization. Carles Roger Riera Molina, Camilo Rey, Thiago Serra, Eloi Puertas, O. Pujol. 30 Jan 2022.
  • Probabilistic bounds on neuron death in deep rectifier networks. Blaine Rister, D. Rubin. 13 Jul 2020.
  • Non-convergence of stochastic gradient descent in the training of deep neural networks. Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek. 12 Jun 2020.
  • Nearly Minimal Over-Parametrization of Shallow Neural Networks. Armin Eftekhari, Chaehwan Song, Volkan Cevher. 09 Oct 2019.
  • Generating Accurate Pseudo-labels in Semi-Supervised Learning and Avoiding Overconfident Predictions via Hermite Polynomial Activations. Vishnu Suresh Lokhande, Songwong Tasneeyapant, Abhay Venkatesh, Sathya Ravi, Vikas Singh. 12 Sep 2019.