A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1

International Conference on Machine Learning (ICML), 2021
6 June 2021
Adil Salim, Lukang Sun, Peter Richtárik
arXiv: 2106.03076
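
The paper analyzes Stein Variational Gradient Descent (SVGD) in the population (infinite-particle) limit. For orientation before the list of citing works, here is a minimal NumPy sketch of the standard finite-particle SVGD update with an RBF kernel and median-heuristic bandwidth; the standard-Gaussian target, particle count, and step size are illustrative assumptions, not settings taken from the paper.

import numpy as np

def svgd_step(X, score, step_size):
    """One SVGD update. X has shape (n, d); score(X) returns grad log p(x) row-wise."""
    n = X.shape[0]
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # (n, n) pairwise squared distances
    h = np.median(sq_dists) / np.log(n + 1) + 1e-12                   # median-heuristic bandwidth
    K = np.exp(-sq_dists / h)                                         # RBF kernel matrix
    # Repulsive term: sum_j grad_{x_j} k(x_j, x_i), written in matrix form.
    grad_K = (2.0 / h) * (X * K.sum(axis=1, keepdims=True) - K @ X)
    phi = (K @ score(X) + grad_K) / n                                 # Stein variational direction
    return X + step_size * phi

# Illustrative run: push particles toward a standard Gaussian target (assumed for the example).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) + 5.0        # particles initialized far from the target
score = lambda X: -X                       # grad log p(x) for a standard Gaussian
for _ in range(500):
    X = svgd_step(X, score, step_size=0.1)
print(X.mean(axis=0), X.std(axis=0))       # should approach mean 0 and standard deviation 1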

Papers citing "A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1"

18 papers shown

Gradient Flow Sampler-based Distributionally Robust Optimization
Zusen Xu, Jia Jie Zhu
112 · 0 · 0
29 Oct 2025

Adaptive Kernel Selection for Stein Variational Gradient Descent
Moritz Melcher, Simon Weissmann, Ashia C. Wilson, Jakob Zech
173 · 0 · 0
02 Oct 2025

Improved Finite-Particle Convergence Rates for Stein Variational Gradient Descent
International Conference on Learning Representations (ICLR), 2024
Sayan Banerjee, Krishnakumar Balasubramanian, Promit Ghosal
259 · 7 · 0
13 Sep 2024

Stein Variational Ergodic Search
Darrick Lee, Cameron J. Lerch, Fabio Ramos, Ian Abraham
178 · 5 · 0
17 Jun 2024

Long-time asymptotics of noisy SVGD outside the population limit
Victor Priser, Pascal Bianchi, Adil Salim
182 · 2 · 0
17 Jun 2024

Zeroth-Order Sampling Methods for Non-Log-Concave Distributions: Alleviating Metastability by Denoising Diffusion
Ye He, Kevin Rojas, Molei Tao
DiffM
357 · 17 · 0
27 Feb 2024

Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning
Haeju Lee, Minchan Jeong, SeYoung Yun, Kee-Eung Kim
AAMLVPVLM
183 · 4 · 0
13 Feb 2024

Towards a Better Theoretical Understanding of Independent Subnetwork Training
International Conference on Machine Learning (ICML), 2023
Egor Shulgin, Peter Richtárik
AI4CE
324 · 8 · 0
28 Jun 2023

Provably Fast Finite Particle Variants of SVGD via Virtual Particle Stochastic Approximation
Neural Information Processing Systems (NeurIPS), 2023
Aniket Das, Dheeraj M. Nagaraj
368 · 8 · 0
27 May 2023

Learning Rate Free Sampling in Constrained Domains
Louis Sharrock, Lester W. Mackey, Christopher Nemeth
323 · 3 · 0
24 May 2023

Tuning-Free Maximum Likelihood Training of Latent Variable Models via Coin Betting
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
Louis Sharrock, Daniel Dodd, Christopher Nemeth
210 · 10 · 0
24 May 2023

Towards Understanding the Dynamics of Gaussian-Stein Variational Gradient Descent
Neural Information Processing Systems (NeurIPS), 2023
Tianle Liu, Promit Ghosal, Krishnakumar Balasubramanian, Natesh S. Pillai
372 · 14 · 0
23 May 2023

Augmented Message Passing Stein Variational Gradient Descent
Jiankui Zhou, Yue Qiu
154 · 0 · 0
18 May 2023

Forward-backward Gaussian variational inference via JKO in the Bures-Wasserstein Space
International Conference on Machine Learning (ICML), 2023
Michael Diao, Krishnakumar Balasubramanian, Sinho Chewi, Adil Salim
BDL
127 · 35 · 0
10 Apr 2023

Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates
International Conference on Machine Learning (ICML), 2023
Louis Sharrock, Christopher Nemeth
BDL
311 · 9 · 0
26 Jan 2023

A Finite-Particle Convergence Rate for Stein Variational Gradient Descent
Neural Information Processing Systems (NeurIPS), 2022
Jiaxin Shi, Lester W. Mackey
222 · 25 · 0
17 Nov 2022

Regularized Stein Variational Gradient Flow
Ye He, Krishnakumar Balasubramanian, Bharath K. Sriperumbudur, Jianfeng Lu
OT
175 · 13 · 0
15 Nov 2022

Sampling with Mollified Interaction Energy Descent
International Conference on Learning Representations (ICLR), 2022
Lingxiao Li, Qiang Liu, Anna Korba, Mikhail Yurochkin, Justin Solomon
181 · 20 · 0
24 Oct 2022