The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective

Geoff Pleiss, John P. Cunningham · 11 June 2021

Papers citing "The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective"

19 / 19 papers shown
Theoretical Limitations of Ensembles in the Age of Overparameterization
Niclas Dern, John P. Cunningham, Geoff Pleiss · BDL, UQCV · 21 Oct 2024

A Unified Kernel for Neural Network Learning
Shao-Qun Zhang, Zong-Yi Chen, Yong-Ming Tian, Xun Lu · 26 Mar 2024

On permutation symmetries in Bayesian neural network posteriors: a variational perspective
Simone Rossi, Ankit Singh, T. Hannagan · 16 Oct 2023

On the Disconnect Between Theory and Practice of Neural Networks: Limits of the NTK Perspective
Jonathan Wenger, Felix Dangel, Agustinus Kristiadi · 29 Sep 2023

Convolutional Deep Kernel Machines
Edward Milsom, Ben Anson, Laurence Aitchison · BDL · 18 Sep 2023

Bayesian inference with finitely wide neural networks
Chi-Ken Lu · BDL · 06 Mar 2023

Gaussian Process-Gated Hierarchical Mixtures of Experts
Yuhao Liu, Marzieh Ajirak, P. Djuric · MoE · 09 Feb 2023

Bayesian Interpolation with Deep Linear Networks
Boris Hanin, Alexander Zlokapa · 29 Dec 2022

An Empirical Analysis of the Advantages of Finite- v.s. Infinite-Width Bayesian Neural Networks
Jiayu Yao, Yaniv Yacoby, Beau Coker, Weiwei Pan, Finale Doshi-Velez · 16 Nov 2022

Variational Inference for Infinitely Deep Neural Networks
Achille Nazaret, David M. Blei · BDL · 21 Sep 2022

On Connecting Deep Trigonometric Networks with Deep Gaussian Processes: Covariance, Expressivity, and Neural Tangent Kernel
Chi-Ken Lu, Patrick Shafto · BDL · 14 Mar 2022

Complexity from Adaptive-Symmetries Breaking: Global Minima in the Statistical Mechanics of Deep Neural Networks
Shaun Li · AI4CE · 03 Jan 2022

Dependence between Bayesian neural network units
M. Vladimirova, Julyan Arbel, Stéphane Girard · BDL · 29 Nov 2021

Depth induces scale-averaging in overparameterized linear Bayesian neural networks
Jacob A. Zavatone-Veth, C. Pehlevan · BDL, UQCV, MDE · 23 Nov 2021

Conditional Deep Gaussian Processes: empirical Bayes hyperdata learning
Chi-Ken Lu, Patrick Shafto · BDL · 01 Oct 2021

A theory of representation learning gives a deep generalisation of kernel methods
Adam X. Yang, Maxime Robeyns, Edward Milsom, Ben Anson, Nandi Schoots, Laurence Aitchison · BDL · 30 Aug 2021

Why bigger is not always better: on finite and infinite neural networks
Laurence Aitchison · 17 Oct 2019

Benefits of depth in neural networks
Matus Telgarsky · 14 Feb 2016

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani · UQCV, BDL · 06 Jun 2015