On Exact Computation with an Infinitely Wide Neural Net

26 April 2019 · arXiv:1904.11955
Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang

Papers citing "On Exact Computation with an Infinitely Wide Neural Net" (50 of 227 papers shown)
  • A Neural Tangent Kernel Perspective of GANs. Jean-Yves Franceschi, Emmanuel de Bézenac, Ibrahim Ayed, Mickaël Chen, Sylvain Lamprier, Patrick Gallinari. 10 Jun 2021.
  • A self consistent theory of Gaussian Processes captures feature learning effects in finite CNNs. Gadi Naveh, Z. Ringel. 08 Jun 2021. [SSL, MLT]
  • The Future is Log-Gaussian: ResNets and Their Infinite-Depth-and-Width Limit at Initialization. Mufan Bill Li, Mihai Nica, Daniel M. Roy. 07 Jun 2021.
  • Neural Active Learning with Performance Guarantees. Pranjal Awasthi, Christoph Dann, Claudio Gentile, Ayush Sekhari, Zhilei Wang. 06 Jun 2021.
  • Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes. Elvis Dohmatob. 04 Jun 2021.
  • FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Convergence Analysis. Baihe Huang, Xiaoxiao Li, Zhao-quan Song, Xin Yang. 11 May 2021. [FedML]
  • Generalization Guarantees for Neural Architecture Search with Train-Validation Split. Samet Oymak, Mingchen Li, Mahdi Soltanolkotabi. 29 Apr 2021. [AI4CE, OOD]
  • Unsupervised Shape Completion via Deep Prior in the Neural Tangent Kernel Perspective. Lei Chu, Hao Pan, Wenping Wang. 19 Apr 2021. [3DPC]
  • A Neural Pre-Conditioning Active Learning Algorithm to Reduce Label Complexity. Seo Taek Kong, Soomin Jeon, Dongbin Na, Jaewon Lee, Honglak Lee, Kyu-Hwan Jung. 08 Apr 2021.
  • Cycle Self-Training for Domain Adaptation. Hong Liu, Jianmin Wang, Mingsheng Long. 05 Mar 2021.
  • Fast Adaptation with Linearized Neural Networks. Wesley J. Maddox, Shuai Tang, Pablo G. Moreno, A. Wilson, Andreas C. Damianou. 02 Mar 2021.
  • Experiments with Rich Regime Training for Deep Learning. Xinyan Li, A. Banerjee. 26 Feb 2021.
  • Learning with invariances in random features and kernel models. Song Mei, Theodor Misiakiewicz, Andrea Montanari. 25 Feb 2021. [OOD]
  • A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network. Mo Zhou, Rong Ge, Chi Jin. 04 Feb 2021.
  • On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks. Sifan Wang, Hanwen Wang, P. Perdikaris. 18 Dec 2020.
  • Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning. Zeyuan Allen-Zhu, Yuanzhi Li. 17 Dec 2020. [FedML]
  • The Implicit Bias for Adaptive Optimization Algorithms on Homogeneous Neural Networks. Bohan Wang, Qi Meng, Wei Chen, Tie-Yan Liu. 11 Dec 2020.
  • Gradient Starvation: A Learning Proclivity in Neural Networks. Mohammad Pezeshki, Sekouba Kaba, Yoshua Bengio, Aaron Courville, Doina Precup, Guillaume Lajoie. 18 Nov 2020. [MLT]
  • Power of data in quantum machine learning. Hsin-Yuan Huang, Michael Broughton, Masoud Mohseni, Ryan Babbush, Sergio Boixo, Hartmut Neven, Jarrod R. McClean. 03 Nov 2020.
  • Dataset Meta-Learning from Kernel Ridge-Regression. Timothy Nguyen, Zhourong Chen, Jaehoon Lee. 30 Oct 2020. [DD]
  • Are wider nets better given the same number of parameters? A. Golubeva, Behnam Neyshabur, Guy Gur-Ari. 27 Oct 2020.
  • Stable ResNet. Soufiane Hayou, Eugenio Clerico, Bo He, George Deligiannidis, Arnaud Doucet, Judith Rousseau. 24 Oct 2020. [ODL, SSeg]
  • CoinDICE: Off-Policy Confidence Interval Estimation. Bo Dai, Ofir Nachum, Yinlam Chow, Lihong Li, Csaba Szepesvári, Dale Schuurmans. 22 Oct 2020. [OffRL]
  • A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix. T. Doan, Mehdi Abbana Bennani, Bogdan Mazoure, Guillaume Rabusseau, Pierre Alquier. 07 Oct 2020. [CLL]
  • Understanding Self-supervised Learning with Dual Deep Networks. Yuandong Tian, Lantao Yu, Xinlei Chen, Surya Ganguli. 01 Oct 2020. [SSL]
  • Deep Equals Shallow for ReLU Networks in Kernel Regimes. A. Bietti, Francis R. Bach. 30 Sep 2020.
  • Improved generalization by noise enhancement. Takashi Mori, Masahito Ueda. 28 Sep 2020.
  • Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS. Lin Chen, Sheng Xu. 22 Sep 2020.
  • Generalized Leverage Score Sampling for Neural Networks. J. Lee, Ruoqi Shen, Zhao-quan Song, Mengdi Wang, Zheng Yu. 21 Sep 2020.
  • Deep Networks and the Multiple Manifold Problem. Sam Buchanan, D. Gilboa, John N. Wright. 25 Aug 2020.
  • Obtaining Adjustable Regularization for Free via Iterate Averaging. Jingfeng Wu, Vladimir Braverman, Lin F. Yang. 15 Aug 2020.
  • Multiple Descent: Design Your Own Generalization Curve. Lin Chen, Yifei Min, M. Belkin, Amin Karbasi. 03 Aug 2020. [DRL]
  • When and why PINNs fail to train: A neural tangent kernel perspective. Sifan Wang, Xinling Yu, P. Perdikaris. 28 Jul 2020.
  • Explicit Regularisation in Gaussian Noise Injections. A. Camuto, M. Willetts, Umut Simsekli, Stephen J. Roberts, Chris Holmes. 14 Jul 2020.
  • Beyond Signal Propagation: Is Feature Diversity Necessary in Deep Neural Network Initialization? Yaniv Blumenfeld, D. Gilboa, Daniel Soudry. 02 Jul 2020. [ODL]
  • Associative Memory in Iterated Overparameterized Sigmoid Autoencoders. Yibo Jiang, C. Pehlevan. 30 Jun 2020.
  • Tensor Programs II: Neural Tangent Kernel for Any Architecture. Greg Yang. 25 Jun 2020.
  • An analytic theory of shallow networks dynamics for hinge loss classification. Franco Pellegrini, Giulio Biroli. 19 Jun 2020.
  • Kernel Alignment Risk Estimator: Risk Prediction from Training Data. Arthur Jacot, Berfin Şimşek, Francesco Spadaro, Clément Hongler, Franck Gabriel. 17 Jun 2020.
  • On the training dynamics of deep networks with $L_2$ regularization. Aitor Lewkowycz, Guy Gur-Ari. 15 Jun 2020.
  • Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory. Yufeng Zhang, Qi Cai, Zhuoran Yang, Yongxin Chen, Zhaoran Wang. 08 Jun 2020. [OOD, MLT]
  • Coresets via Bilevel Optimization for Continual Learning and Streaming. Zalan Borsos, Mojmír Mutný, Andreas Krause. 06 Jun 2020. [CLL]
  • Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks. Z. Fan, Zhichao Wang. 25 May 2020.
  • Feature Purification: How Adversarial Training Performs Robust Deep Learning. Zeyuan Allen-Zhu, Yuanzhi Li. 20 May 2020. [MLT, AAML]
  • Modularizing Deep Learning via Pairwise Learning With Kernels. Shiyu Duan, Shujian Yu, José C. Príncipe. 12 May 2020. [MoMe]
  • Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond. Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens. 23 Apr 2020. [BDL]
  • Predicting the outputs of finite deep neural networks trained with noisy gradients. Gadi Naveh, Oded Ben-David, H. Sompolinsky, Z. Ringel. 02 Apr 2020.
  • A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth. Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying. 11 Mar 2020. [MLT]
  • Frequency Bias in Neural Networks for Input of Non-Uniform Density. Ronen Basri, Meirav Galun, Amnon Geifman, David Jacobs, Yoni Kasten, S. Kritchman. 10 Mar 2020.
  • The large learning rate phase of deep learning: the catapult mechanism. Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari. 04 Mar 2020. [ODL]