ResearchTrend.AI
Nonasymptotic analysis of Stochastic Gradient Hamiltonian Monte Carlo under local conditions for nonconvex optimization

13 February 2020
Ömer Deniz Akyildiz, Sotirios Sabanis

Papers citing "Nonasymptotic analysis of Stochastic Gradient Hamiltonian Monte Carlo under local conditions for nonconvex optimization"

17 / 17 papers shown
On theoretical guarantees and a blessing of dimensionality for nonconvex sampling
Martin Chak
12 Nov 2024
Statistical Finite Elements via Interacting Particle Langevin Dynamics
Alex Glyn-Davies, Connor Duffin, Ieva Kazlauskaite, Mark Girolami, O. Deniz Akyildiz
11 Sep 2024
Kinetic Interacting Particle Langevin Monte Carlo
Paul Felix Valsecchi Oliva, O. Deniz Akyildiz
08 Jul 2024
Proximal Interacting Particle Langevin Algorithms
Paula Cordero Encinar, F. R. Crucinio, O. Deniz Akyildiz
20 Jun 2024
Subsampling Error in Stochastic Gradient Langevin Diffusions
Kexin Jin, Chenguang Liu, J. Latz
23 May 2023
Kinetic Langevin MCMC Sampling Without Gradient Lipschitz Continuity -- the Strongly Convex Case
Tim Johnston, Iosif Lytras, Sotirios Sabanis
19 Jan 2023
Global convergence of optimized adaptive importance samplers
Ömer Deniz Akyildiz
02 Jan 2022
Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions
Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa
13 Dec 2021
Statistical Finite Elements via Langevin Dynamics
Ömer Deniz Akyildiz, Connor Duffin, Sotirios Sabanis, Mark Girolami
21 Oct 2021
A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions
Arnulf Jentzen, Adrian Riekert
10 Aug 2021
Decentralized Bayesian Learning with Metropolis-Adjusted Hamiltonian Monte Carlo
Vyacheslav Kungurtsev, Adam D. Cobb, T. Javidi, Brian Jalaian
15 Jul 2021
A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions
Arnulf Jentzen, Adrian Riekert
01 Apr 2021
Convergence rates for gradient descent in the training of overparameterized artificial neural networks with biases
Arnulf Jentzen, T. Kröger
23 Feb 2021
A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions
Patrick Cheridito, Arnulf Jentzen, Adrian Riekert, Florian Rossmannek
19 Feb 2021
Convergence of Langevin Monte Carlo in Chi-Squared and Renyi Divergence
Murat A. Erdogdu, Rasa Hosseinzadeh, Matthew Shunshi Zhang
22 Jul 2020
Multi-index Antithetic Stochastic Gradient Algorithm
Mateusz B. Majka, Marc Sabate Vidales, Łukasz Szpruch
10 Jun 2020
Convergence rates for optimised adaptive importance samplers
Ömer Deniz Akyildiz, Joaquín Míguez
28 Mar 2019