
Benchmarking the Neural Linear Model for Regression

arXiv:1912.08416
18 December 2019
Sebastian W. Ober, C. Rasmussen

Papers citing "Benchmarking the Neural Linear Model for Regression"

9 papers shown
Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models
Alexander Lin, Bahareh Tolooshams, Yves Atchadé, Demba E. Ba
05 Jun 2023
Do Bayesian Neural Networks Need To Be Fully Stochastic?
Mrinank Sharma, Sebastian Farquhar, Eric T. Nalisnick, Tom Rainforth
11 Nov 2022
Trust Your Robots! Predictive Uncertainty Estimation of Neural Networks with Sparse Gaussian Processes
Jongseo Lee, Jianxiang Feng, Matthias Humt, M. Müller, Rudolph Triebel
20 Sep 2021
Bayesian Deep Basis Fitting for Depth Completion with Uncertainty
Chao Qu, Wenxin Liu, Camillo J. Taylor
29 Mar 2021
The Promises and Pitfalls of Deep Kernel Learning
Sebastian W. Ober, C. Rasmussen, Mark van der Wilk
24 Feb 2021
Uncertainty-Aware (UNA) Bases for Deep Bayesian Regression Using Multi-Headed Auxiliary Networks
Sujay Thakur, Cooper Lorsung, Yaniv Yacoby, Finale Doshi-Velez, Weiwei Pan
21 Jun 2020
Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks
Agustinus Kristiadi, Matthias Hein, Philipp Hennig
24 Feb 2020
Marginally-calibrated deep distributional regression
Nadja Klein, David J. Nott, M. Smith
26 Aug 2019
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
06 Jun 2015