
A proof of convergence for gradient descent in the training of
artificial neural networks for constant target functions
Journal of Complexity (JC), 2021
Papers citing "A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions"
| Title | Venue | Year |
|---|---|---|
| Gradient descent provably escapes saddle points in the training of shallow ReLU networks | Journal of Optimization Theory and Applications (JOTA) | 2022 |
| A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions | Journal of Machine Learning Research (JMLR) | 2021 |
| Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation | Journal of Mathematical Analysis and Applications (JMAA) | 2021 |
| A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions | Zeitschrift für Angewandte Mathematik und Physik (ZAMP) | 2021 |
| Landscape analysis for shallow neural networks: complete classification of critical points for affine target functions | Journal of Nonlinear Science (J. Nonlinear Sci.) | 2021 |











