Bayesian filtering unifies adaptive and non-adaptive neural network optimization methods

19 July 2018
Laurence Aitchison
ODL
ArXiv (abs) · PDF · HTML

Papers citing "Bayesian filtering unifies adaptive and non-adaptive neural network optimization methods"

11 / 11 papers shown
Are vision language models robust to uncertain inputs?
Xi Wang, Eric Nalisnick
AAML, VLM
Presented at ResearchTrend Connect | VLM on 18 Jun 2025
17 May 2025

Implicit Maximum a Posteriori Filtering via Adaptive Optimization
Gianluca M Bencomo, Jake C. Snell, Thomas L. Griffiths
17 Nov 2023

On Sequential Bayesian Inference for Continual Learning
Samuel Kessler, Adam D. Cobb, Tim G. J. Rudner, S. Zohren, Stephen J. Roberts
CLL, BDL
04 Jan 2023

Hebbian Deep Learning Without Feedback
Adrien Journé, Hector Garcia Rodriguez, Qinghai Guo, Timoleon Moraitis
AAML
23 Sep 2022

A Comprehensive Review of Digital Twin -- Part 1: Modeling and Twinning Enabling Technologies
Adam Thelen, Xiaoge Zhang, Olga Fink, Yan Lu, Sayan Ghosh, B. Youn, Michael D. Todd, S. Mahadevan, Chao Hu, Zhen Hu
SyDa, AI4CE
26 Aug 2022

Robustness to corruption in pre-trained Bayesian neural networks
Xi Wang, Laurence Aitchison
OOD, UQCV
24 Jun 2022

Gradient Descent on Neurons and its Link to Approximate Second-Order Optimization
Frederik Benzing
ODL
28 Jan 2022

The Bayesian Learning Rule
Mohammad Emtiyaz Khan, Håvard Rue
BDL
09 Jul 2021

Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers
Robin M. Schmidt, Frank Schneider, Philipp Hennig
ODL
03 Jul 2020

On Empirical Comparisons of Optimizers for Deep Learning
Dami Choi, Christopher J. Shallue, Zachary Nado, Jaehoon Lee, Chris J. Maddison, George E. Dahl
11 Oct 2019

A Latent Variational Framework for Stochastic Optimization
P. Casgrain
DRL
05 May 2019