Taming neural networks with TUSLA: Non-convex learning via adaptive stochastic gradient Langevin algorithms

25 June 2020
A. Lovas, Iosif Lytras, Miklós Rásonyi, Sotirios Sabanis
ArXiv | PDF | HTML
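As orientation for the citing papers listed below: TUSLA is a tamed unadjusted stochastic Langevin algorithm, in which the stochastic gradient is "tamed" (rescaled) before the usual Langevin step so that updates stay bounded even when the gradient grows super-linearly in the parameters. The following is a minimal Python sketch, assuming the standard taming form that divides the gradient by 1 + sqrt(lam) * ||theta||^(2r); the step size lam, inverse temperature beta, and exponent r are illustrative placeholders here, not values taken from the paper.

import numpy as np

def tusla_step(theta, stoch_grad, lam=1e-3, beta=1e8, r=1.0, rng=None):
    """One tamed stochastic gradient Langevin step (sketch, assumed form)."""
    rng = np.random.default_rng() if rng is None else rng
    g = stoch_grad(theta)                        # noisy gradient estimate H(theta, X)
    taming = 1.0 + np.sqrt(lam) * np.linalg.norm(theta) ** (2 * r)
    noise = rng.standard_normal(theta.shape)     # xi ~ N(0, I)
    return theta - lam * g / taming + np.sqrt(2.0 * lam / beta) * noise

# Toy usage: a quartic loss whose gradient grows super-linearly in theta,
# the regime that taming is designed to stabilize.
theta = np.array([5.0, -3.0])
grad = lambda th: 4.0 * th ** 3                  # gradient of sum(theta**4)
for _ in range(1000):
    theta = tusla_step(theta, grad)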

Papers citing "Taming neural networks with TUSLA: Non-convex learning via adaptive stochastic gradient Langevin algorithms"

12 / 12 papers shown
Non-convex sampling for a mixture of locally smooth potentials
D. Nguyen
38 · 0 · 0
31 Jan 2023
Kinetic Langevin MCMC Sampling Without Gradient Lipschitz Continuity -- the Strongly Convex Case
Tim Johnston, Iosif Lytras, Sotirios Sabanis
43 · 8 · 0
19 Jan 2023
Non-asymptotic convergence bounds for modified tamed unadjusted Langevin algorithm in non-convex setting
Ariel Neufeld, Matthew Ng Cheng En, Ying Zhang
39 · 11 · 0
06 Jul 2022
Unadjusted Langevin algorithm for sampling a mixture of weakly smooth potentials
D. Nguyen
40 · 5 · 0
17 Dec 2021
Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions
Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa
MLT
38 · 6 · 0
13 Dec 2021
Statistical Finite Elements via Langevin Dynamics
Ömer Deniz Akyildiz, Connor Duffin, Sotirios Sabanis, Mark Girolami
41 · 11 · 0
21 Oct 2021
A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions
Arnulf Jentzen, Adrian Riekert
40 · 13 · 0
10 Aug 2021
Polygonal Unadjusted Langevin Algorithms: Creating stable and efficient adaptive algorithms for neural networks
Dong-Young Lim, Sotirios Sabanis
44 · 11 · 0
28 May 2021
A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions
Arnulf Jentzen, Adrian Riekert
MLT
42 · 13 · 0
01 Apr 2021
Convergence rates for gradient descent in the training of overparameterized artificial neural networks with biases
Arnulf Jentzen, T. Kröger
ODL
35 · 7 · 0
23 Feb 2021
A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions
Patrick Cheridito, Arnulf Jentzen, Adrian Riekert, Florian Rossmannek
32 · 24 · 0
19 Feb 2021
Weak error analysis for stochastic gradient descent optimization algorithms
A. Bercher, Lukas Gonon, Arnulf Jentzen, Diyora Salimova
36 · 4 · 0
03 Jul 2020