arXiv: 1810.05148
Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes
11 October 2018
Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Greg Yang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, Jascha Narain Sohl-Dickstein
Communities: UQCV, BDL
Papers citing "Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes" (50 / 228 papers shown)
- Deep Stable neural networks: large-width asymptotics and convergence rates. Stefano Favaro, S. Fortini, Stefano Peluchetti. [BDL] (02 Aug 2021)
- Deep Networks Provably Classify Data on Curves. Tingran Wang, Sam Buchanan, D. Gilboa, John N. Wright. (29 Jul 2021)
- Dataset Distillation with Infinitely Wide Convolutional Networks. Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee. [DD] (27 Jul 2021)
- A variational approximate posterior for the deep Wishart process. Sebastian W. Ober, Laurence Aitchison. [BDL] (21 Jul 2021)
- Understanding the Distributions of Aggregation Layers in Deep Neural Networks. Eng-Jon Ong, S. Husain, M. Bober. [FAtt, FedML, AI4CE] (09 Jul 2021)
- Random Neural Networks in the Infinite Width Limit as Gaussian Processes. Boris Hanin. [BDL] (04 Jul 2021)
- Scale Mixtures of Neural Network Gaussian Processes. Hyungi Lee, Eunggu Yun, Hongseok Yang, Juho Lee. [UQCV, BDL] (03 Jul 2021)
- Subspace Clustering Based Analysis of Neural Networks. Uday Singh Saini, Pravallika Devineni, Evangelos E. Papalexakis. [GNN] (02 Jul 2021)
- Implicit Acceleration and Feature Learning in Infinitely Wide Neural Networks with Bottlenecks. Etai Littwin, Omid Saremi, Shuangfei Zhai, Vimal Thilak, Hanlin Goh, J. Susskind, Greg Yang. (01 Jul 2021)
- α-Stable convergence of heavy-tailed infinitely-wide neural networks. Paul Jung, Hoileong Lee, Jiho Lee, Hongseok Yang. (18 Jun 2021)
- Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation. Haoxiang Wang, Han Zhao, Bo-wen Li. (16 Jun 2021)
- Locality defeats the curse of dimensionality in convolutional teacher-student scenarios. Alessandro Favero, Francesco Cagnetta, M. Wyart. (16 Jun 2021)
- How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective. Akhilan Boopathy, Ila Fiete. (15 Jun 2021)
- Scaling Neural Tangent Kernels via Sketching and Random Features. A. Zandieh, Insu Han, H. Avron, N. Shoham, Chaewon Kim, Jinwoo Shin. (15 Jun 2021)
- Precise characterization of the prior predictive distribution of deep ReLU networks. Lorenzo Noci, Gregor Bachmann, Kevin Roth, Sebastian Nowozin, Thomas Hofmann. [BDL, UQCV] (11 Jun 2021)
- The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective. Geoff Pleiss, John P. Cunningham. (11 Jun 2021)
- A self consistent theory of Gaussian Processes captures feature learning effects in finite CNNs. Gadi Naveh, Z. Ringel. [SSL, MLT] (08 Jun 2021)
- The Future is Log-Gaussian: ResNets and Their Infinite-Depth-and-Width Limit at Initialization. Mufan Bill Li, Mihai Nica, Daniel M. Roy. (07 Jun 2021)
- Reverse Engineering the Neural Tangent Kernel. James B. Simon, Sajant Anand, M. DeWeese. (06 Jun 2021)
- Asymptotics of representation learning in finite Bayesian neural networks. Jacob A. Zavatone-Veth, Abdulkadir Canatar, Benjamin S. Ruben, C. Pehlevan. (01 Jun 2021)
- Priors in Bayesian Deep Learning: A Review. Vincent Fortuin. [UQCV, BDL] (14 May 2021)
- Deep Neural Networks as Point Estimates for Deep Gaussian Processes. Vincent Dutordoir, J. Hensman, Mark van der Wilk, Carl Henrik Ek, Zoubin Ghahramani, N. Durrande. [BDL, UQCV] (10 May 2021)
- Tensor Programs IIb: Architectural Universality of Neural Tangent Kernel Training Dynamics. Greg Yang, Etai Littwin. (08 May 2021)
- ResMLP: Feedforward networks for image classification with data-efficient training. Hugo Touvron, Piotr Bojanowski, Mathilde Caron, Matthieu Cord, Alaaeldin El-Nouby, ..., Gautier Izacard, Armand Joulin, Gabriel Synnaeve, Jakob Verbeek, Hervé Jégou. [VLM] (07 May 2021)
- What Are Bayesian Neural Network Posteriors Really Like? Pavel Izmailov, Sharad Vikram, Matthew D. Hoffman, A. Wilson. [UQCV, BDL] (29 Apr 2021)
- On the validity of kernel approximations for orthogonally-initialized neural networks. James Martens. (13 Apr 2021)
- How rotational invariance of common kernels prevents generalization in high dimensions. Konstantin Donhauser, Mingqi Wu, Fanny Yang. (09 Apr 2021)
- Learning with Neural Tangent Kernels in Near Input Sparsity Time. A. Zandieh. (01 Apr 2021)
- A Temporal Kernel Approach for Deep Learning with Continuous-time Information. Da Xu, Chuanwei Ruan, Evren Körpeoglu, Sushant Kumar, Kannan Achan. [SyDa, AI4TS] (28 Mar 2021)
- Weighted Neural Tangent Kernel: A Generalized and Improved Network-Induced Kernel. Lei Tan, Shutong Wu, Xiaolin Huang. (22 Mar 2021)
- Why flatness does and does not correlate with generalization for deep neural networks. Shuo Zhang, Isaac Reid, Guillermo Valle Pérez, A. Louis. (10 Mar 2021)
- Approximation and Learning with Deep Convolutional Models: a Kernel Perspective. A. Bietti. (19 Feb 2021)
- Quantum field-theoretic machine learning. Dimitrios Bachtis, Gert Aarts, B. Lucini. [AI4CE] (18 Feb 2021)
- Non-asymptotic approximations of neural networks by Gaussian processes. Ronen Eldan, Dan Mikulincer, T. Schramm. (17 Feb 2021)
- Double-descent curves in neural networks: a new perspective using Gaussian processes. Ouns El Harzli, Bernardo Cuenca Grau, Guillermo Valle Pérez, A. Louis. (14 Feb 2021)
- Explaining Neural Scaling Laws. Yasaman Bahri, Ethan Dyer, Jared Kaplan, Jaehoon Lee, Utkarsh Sharma. (12 Feb 2021)
- Bayesian Neural Network Priors Revisited. Vincent Fortuin, Adrià Garriga-Alonso, Sebastian W. Ober, F. Wenzel, Gunnar Rätsch, Richard E. Turner, Mark van der Wilk, Laurence Aitchison. [BDL, UQCV] (12 Feb 2021)
- Meta-Learning with Neural Tangent Kernels. Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu. (07 Feb 2021)
- Faster Kernel Interpolation for Gaussian Processes. Mohit Yadav, Daniel Sheldon, Cameron Musco. [BDL] (28 Jan 2021)
- Implicit Bias of Linear RNNs. M Motavali Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, S. Rangan, A. Fletcher. (19 Jan 2021)
- Correlated Weights in Infinite Limits of Deep Convolutional Neural Networks. Adrià Garriga-Alonso, Mark van der Wilk. (11 Jan 2021)
- Infinitely Wide Tensor Networks as Gaussian Process. Erdong Guo, D. Draper. (07 Jan 2021)
- Perspective: A Phase Diagram for Deep Learning unifying Jamming, Feature Learning and Lazy Training. Mario Geiger, Leonardo Petrini, M. Wyart. [DRL] (30 Dec 2020)
- Enhanced Recurrent Neural Tangent Kernels for Non-Time-Series Data. Sina Alemohammad, Randall Balestriero, Zichao Wang, Richard Baraniuk. [AI4TS] (09 Dec 2020)
- Analyzing Finite Neural Networks: Can We Trust Neural Tangent Kernel Theory? Mariia Seleznova, Gitta Kutyniok. [AAML] (08 Dec 2020)
- Generalization bounds for deep learning. Guillermo Valle Pérez, A. Louis. [BDL] (07 Dec 2020)
- Statistical Mechanics of Deep Linear Neural Networks: The Back-Propagating Kernel Renormalization. Qianyi Li, H. Sompolinsky. (07 Dec 2020)
- Towards NNGP-guided Neural Architecture Search. Daniel S. Park, Jaehoon Lee, Daiyi Peng, Yuan Cao, Jascha Narain Sohl-Dickstein. [BDL] (11 Nov 2020)
- Dataset Meta-Learning from Kernel Ridge-Regression. Timothy Nguyen, Zhourung Chen, Jaehoon Lee. [DD] (30 Oct 2020)
- Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth. Thao Nguyen, M. Raghu, Simon Kornblith. [OOD] (29 Oct 2020)