Stochastic Gradient Descent as Approximate Bayesian Inference
Stephan Mandt, Matthew D. Hoffman, David M. Blei
arXiv:1704.04289 (v2, latest) · 13 April 2017 · BDL

Papers citing "Stochastic Gradient Descent as Approximate Bayesian Inference"
Showing 50 of 327 citing papers.

Drawing Multiple Augmentation Samples Per Image During Training Efficiently Decreases Test Error
Stanislav Fort, Andrew Brock, Razvan Pascanu, Soham De, Samuel L. Smith (27 May 2021)

Calibration and Uncertainty Quantification of Bayesian Convolutional Neural Networks for Geophysical Applications
L. Mosser, E. Naeini (25 May 2021) · UQCV, BDL

Properties of the After Kernel
Philip M. Long (21 May 2021)

Scalable Bayesian Approach for the DINA Q-matrix Estimation Combining Stochastic Optimization and Variational Inference
Motonori Oka, Kensuke Okada (20 May 2021)

On the Distributional Properties of Adaptive Gradients
Z. Zhiyi, Liu Ziyin (15 May 2021)

An Effective Baseline for Robustness to Distributional Shift
S. Thulasidasan, Sushil Thapa, S. Dhaubhadel, Gopinath Chennupati, Tanmoy Bhattacharya, J. Bilmes (15 May 2021) · OOD, OODD

What Are Bayesian Neural Network Posteriors Really Like?
Pavel Izmailov, Sharad Vikram, Matthew D. Hoffman, A. Wilson (29 Apr 2021) · UQCV, BDL

Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions
Michael C. Burkhart (27 Apr 2021)

SGD Implicitly Regularizes Generalization Error
Daniel A. Roberts (10 Apr 2021) · MLT

Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization
Zeke Xie, Li-xin Yuan, Zhanxing Zhu, Masashi Sugiyama (31 Mar 2021)

LiBRe: A Practical Bayesian Approach to Adversarial Detection
Zhijie Deng, Xiao Yang, Shizhen Xu, Hang Su, Jun Zhu (27 Mar 2021) · BDL, AAML

Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise
Jannik Schmitt, Stefan Roth (15 Mar 2021) · UQCV

On the Validity of Modeling SGD with Stochastic Differential Equations (SDEs)
Zhiyuan Li, Sadhika Malladi, Sanjeev Arora (24 Feb 2021)

Wirelessly Powered Federated Edge Learning: Optimal Tradeoffs Between Convergence and Power Transfer
Qunsong Zeng, Yuqing Du, Kaibin Huang (24 Feb 2021)

Scalable nonparametric Bayesian learning for heterogeneous and dynamic velocity fields
Sunrit Chakraborty, Aritra Guha, Rayleigh Lei, X. Nguyen (15 Feb 2021)

Goal-oriented adaptive sampling under random field modelling of response probability distributions
Athénais Gautier, D. Ginsbourger, G. Pirot (15 Feb 2021)

Strength of Minibatch Noise in SGD
Liu Ziyin, Kangqiao Liu, Takashi Mori, Masakuni Ueda (10 Feb 2021) · ODL, MLT

On the Origin of Implicit Regularization in Stochastic Gradient Descent
Samuel L. Smith, Benoit Dherin, David Barrett, Soham De (28 Jan 2021) · MLT

Estimating informativeness of samples with Smooth Unique Information
Hrayr Harutyunyan, Alessandro Achille, Giovanni Paolini, Orchid Majumder, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto (17 Jan 2021)

A Bayesian neural network predicts the dissolution of compact planetary systems
M. Cranmer, Daniel Tamayo, H. Rein, Peter W. Battaglia, S. Hadden, P. Armitage, S. Ho, D. Spergel (11 Jan 2021) · BDL

The shifted ODE method for underdamped Langevin MCMC
James Foster, Terry Lyons, Harald Oberhauser (10 Jan 2021)

Robustness, Privacy, and Generalization of Adversarial Training
Fengxiang He, Shaopeng Fu, Bohan Wang, Dacheng Tao (25 Dec 2020)

Recent advances in deep learning theory
Fengxiang He, Dacheng Tao (20 Dec 2020) · AI4CE

Improved Image Matting via Real-time User Clicks and Uncertainty Estimation
Tianyi Wei, Dongdong Chen, Wenbo Zhou, Jing Liao, Hanqing Zhao, Weiming Zhang, Nenghai Yu (15 Dec 2020)

Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics
D. Kunin, Javier Sagastuy-Breña, Surya Ganguli, Daniel L. K. Yamins, Hidenori Tanaka (08 Dec 2020)

Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent
Kangqiao Liu, Liu Ziyin, Masakuni Ueda (07 Dec 2020) · MLT

A Review and Comparative Study on Probabilistic Object Detection in Autonomous Driving
Di Feng, Ali Harakeh, Steven Waslander, Klaus C. J. Dietmayer (20 Nov 2020) · AAML, UQCV, EDL

Variational Laplace for Bayesian neural networks
Ali Unlu, Laurence Aitchison (20 Nov 2020) · UQCV, BDL

Efficient and Transferable Adversarial Examples from Bayesian Neural Networks
Martin Gubri, Maxime Cordy, Mike Papadakis, Yves Le Traon, Koushik Sen (10 Nov 2020) · AAML

On the Ergodicity, Bias and Asymptotic Normality of Randomized Midpoint Sampling Method
Ye He, Krishnakumar Balasubramanian, Murat A. Erdogdu (06 Nov 2020)

A Bayesian Perspective on Training Speed and Model Selection
Clare Lyle, Lisa Schut, Binxin Ru, Y. Gal, Mark van der Wilk (27 Oct 2020)

Deep Learning is Singular, and That's Good
Daniel Murfet, Susan Wei, Biwei Huang, Hui Li, Jesse Gell-Redman, T. Quella (22 Oct 2020) · UQCV

Federated Learning via Posterior Averaging: A New Perspective and Practical Algorithms
Maruan Al-Shedivat, Jennifer Gillenwater, Eric Xing, Afshin Rostamizadeh (11 Oct 2020) · FedML

Small Data, Big Decisions: Model Selection in the Small-Data Regime
J. Bornschein, Francesco Visin, Simon Osindero (26 Sep 2020)

Implicit Gradient Regularization
David Barrett, Benoit Dherin (23 Sep 2020)

Extending Label Smoothing Regularization with Self-Knowledge Distillation
Jiyue Wang, Pei Zhang, Wenjie Pang, Jie Li (11 Sep 2020)

Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings
John Mitros, A. Pakrashi, Brian Mac Namee (03 Sep 2020) · UQCV

Robust, Accurate Stochastic Optimization for Variational Inference
Akash Kumar Dhaka, Alejandro Catalina, Michael Riis Andersen, Maans Magnusson, Jonathan H. Huggins, Aki Vehtari (01 Sep 2020)

Learning explanations that are hard to vary
Giambattista Parascandolo, Alexander Neitz, Antonio Orvieto, Luigi Gresele, Bernhard Schölkopf (01 Sep 2020) · FAtt

Noise-induced degeneration in online learning
Yuzuru Sato, Daiji Tsutsui, A. Fujiwara (24 Aug 2020)

Intelligence plays dice: Stochasticity is essential for machine learning
M. Sabuncu (17 Aug 2020)

Efficient hyperparameter optimization by way of PAC-Bayes bound minimization
John J. Cherian, Andrew G. Taube, R. McGibbon, Panagiotis Angelikopoulos, Guy Blanc, M. Snarski, D. D. Richman, J. L. Klepeis, D. Shaw (14 Aug 2020)

Neural networks with late-phase weights
J. Oswald, Seijin Kobayashi, Alexander Meulemans, Christian Henning, Benjamin Grewe, João Sacramento (25 Jul 2020)

Fast Learning for Renewal Optimization in Online Task Scheduling
M. Neely (18 Jul 2020)

Tighter Generalization Bounds for Iterative Differentially Private Learning Algorithms
Fengxiang He, Bohan Wang, Dacheng Tao (18 Jul 2020) · FedML

On stochastic mirror descent with interacting particles: convergence properties and variance reduction
Anastasia Borovykh, N. Kantas, P. Parpas, G. Pavliotis (15 Jul 2020)

Hands-on Bayesian Neural Networks -- a Tutorial for Deep Learning Users
Laurent Valentin Jospin, Wray Buntine, F. Boussaïd, Hamid Laga, Bennamoun (14 Jul 2020) · OOD, BDL, UQCV

Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum
Zeke Xie, Xinrui Wang, Huishuai Zhang, Issei Sato, Masashi Sugiyama (29 Jun 2020) · ODL

Is SGD a Bayesian sampler? Well, almost
Chris Mingard, Guillermo Valle Pérez, Joar Skalse, A. Louis (26 Jun 2020) · BDL

On the Generalization Benefit of Noise in Stochastic Gradient Descent
Samuel L. Smith, Erich Elsen, Soham De (26 Jun 2020) · MLT