ResearchTrend.AI
On Exact Computation with an Infinitely Wide Neural Net (arXiv:1904.11955)

26 April 2019
Sanjeev Arora
S. Du
Wei Hu
Zhiyuan Li
Ruslan Salakhutdinov
Ruosong Wang

Papers citing "On Exact Computation with an Infinitely Wide Neural Net"

50 / 227 papers shown
  • Dataset Distillation with Convexified Implicit Gradients. Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus. 13 Feb 2023. [DD]
  • Global Convergence Rate of Deep Equilibrium Models with General Activations. Lan V. Truong. 11 Feb 2023.
  • Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation. Noel Loo, Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus. 02 Feb 2023. [DD]
  • Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning. François Caron, Fadhel Ayed, Paul Jung, Hoileong Lee, Juho Lee, Hongseok Yang. 02 Feb 2023.
  • A Simple Algorithm For Scaling Up Kernel Methods. Tengyu Xu, Bryan Kelly, Semyon Malamud. 26 Jan 2023.
  • ZiCo: Zero-shot NAS via Inverse Coefficient of Variation on Gradients. Guihong Li, Yuedong Yang, Kartikeya Bhardwaj, R. Marculescu. 26 Jan 2023.
  • Catapult Dynamics and Phase Transitions in Quadratic Nets. David Meltzer, Junyu Liu. 18 Jan 2023.
  • Dataset Distillation: A Comprehensive Review. Ruonan Yu, Songhua Liu, Xinchao Wang. 17 Jan 2023. [DD]
  • Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs. Chenxiao Yang, Qitian Wu, Jiahua Wang, Junchi Yan. 18 Dec 2022. [AI4CE]
  • Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models. Rui Zhu, Di Tang, Siyuan Tang, Xiaofeng Wang, Haixu Tang. 09 Dec 2022. [AAML, FedML]
  • Bypass Exponential Time Preprocessing: Fast Neural Network Training via Weight-Data Correlation Preprocessing. Josh Alman, Jiehao Liang, Zhao-quan Song, Ruizhe Zhang, Danyang Zhuo. 25 Nov 2022.
  • Neural tangent kernel analysis of PINN for advection-diffusion equation. M. Saadat, B. Gjorgiev, L. Das, G. Sansavini. 21 Nov 2022.
  • Characterizing the Spectrum of the NTK via a Power Series Expansion. Michael Murray, Hui Jin, Benjamin Bowman, Guido Montúfar. 15 Nov 2022.
  • Do highly over-parameterized neural networks generalize since bad solutions are rare? Julius Martinetz, T. Martinetz. 07 Nov 2022.
  • Evolution of Neural Tangent Kernels under Benign and Adversarial Training. Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus. 21 Oct 2022. [AAML]
  • Bayesian deep learning framework for uncertainty quantification in high dimensions. Jeahan Jung, Minseok Choi. 21 Oct 2022. [BDL, UQCV]
  • Global Convergence of SGD On Two Layer Neural Nets. Pulkit Gopalani, Anirbit Mukherjee. 20 Oct 2022.
  • What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness? Nikolaos Tsilivis, Julia Kempe. 11 Oct 2022. [AAML]
  • Efficient NTK using Dimensionality Reduction. Nir Ailon, Supratim Shit. 10 Oct 2022.
  • FedMT: Federated Learning with Mixed-type Labels. Qiong Zhang, Jing Peng, Xin Zhang, A. Talhouk, Gang Niu, Xiaoxiao Li. 05 Oct 2022. [FedML]
  • Minimalistic Unsupervised Learning with the Sparse Manifold Transform. Yubei Chen, Zeyu Yun, Y. Ma, Bruno A. Olshausen, Yann LeCun. 30 Sep 2022.
  • Joint Embedding Self-Supervised Learning in the Kernel Regime. B. Kiani, Randall Balestriero, Yubei Chen, S. Lloyd, Yann LeCun. 29 Sep 2022. [SSL]
  • Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty. Thomas George, Guillaume Lajoie, A. Baratin. 19 Sep 2022.
  • Approximation results for Gradient Descent trained Shallow Neural Networks in $1d$. R. Gentile, G. Welper. 17 Sep 2022. [ODL]
  • Generalization Properties of NAS under Activation and Skip Connection Search. Zhenyu Zhu, Fanghui Liu, Grigorios G. Chrysos, V. Cevher. 15 Sep 2022. [AI4CE]
  • Gradient Flows for L2 Support Vector Machine Training. Christian Bauckhage, H. Schneider, Benjamin Wulff, R. Sifa. 08 Aug 2022.
  • Towards Understanding Mixture of Experts in Deep Learning. Zixiang Chen, Yihe Deng, Yue-bo Wu, Quanquan Gu, Yuan-Fang Li. 04 Aug 2022. [MLT, MoE]
  • Can we achieve robustness from data alone? Nikolaos Tsilivis, Jingtong Su, Julia Kempe. 24 Jul 2022. [OOD, DD]
  • The Neural Race Reduction: Dynamics of Abstraction in Gated Networks. Andrew M. Saxe, Shagun Sodhani, Sam Lewallen. 21 Jul 2022. [AI4CE]
  • Graph Neural Network Bandits. Parnian Kassraie, Andreas Krause, Ilija Bogunovic. 13 Jul 2022.
  • Informed Learning by Wide Neural Networks: Convergence, Generalization and Sampling Complexity. Jianyi Yang, Shaolei Ren. 02 Jul 2022.
  • Neural Networks can Learn Representations with Gradient Descent. Alexandru Damian, Jason D. Lee, Mahdi Soltanolkotabi. 30 Jun 2022. [SSL, MLT]
  • Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis. Alexander Munteanu, Simon Omlor, Zhao-quan Song, David P. Woodruff. 26 Jun 2022.
  • Making Look-Ahead Active Learning Strategies Feasible with Neural Tangent Kernels. Mohamad Amin Mohamadi, Wonho Bae, Danica J. Sutherland. 25 Jun 2022.
  • Fast Finite Width Neural Tangent Kernel. Roman Novak, Jascha Narain Sohl-Dickstein, S. Schoenholz. 17 Jun 2022. [AAML]
  • Large-width asymptotics for ReLU neural networks with $α$-Stable initializations. Stefano Favaro, S. Fortini, Stefano Peluchetti. 16 Jun 2022.
  • Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction. Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora. 14 Jun 2022. [FAtt]
  • Overcoming the Spectral Bias of Neural Value Approximation. Ge Yang, Anurag Ajay, Pulkit Agrawal. 09 Jun 2022.
  • Identifying good directions to escape the NTK regime and efficiently learn low-degree plus sparse polynomials. Eshaan Nichani, Yunzhi Bai, Jason D. Lee. 08 Jun 2022.
  • Infinite Recommendation Networks: A Data-Centric Approach. Noveen Sachdeva, Mehak Preet Dhaliwal, Carole-Jean Wu, Julian McAuley. 03 Jun 2022. [DD]
  • Dual Convexified Convolutional Neural Networks. Site Bai, Chuyang Ke, Jean Honorio. 27 May 2022.
  • Global Convergence of Over-parameterized Deep Equilibrium Models. Zenan Ling, Xingyu Xie, Qiuhao Wang, Zongpeng Zhang, Zhouchen Lin. 27 May 2022.
  • Analyzing Tree Architectures in Ensembles via Neural Tangent Kernel. Ryuichi Kanoh, M. Sugiyama. 25 May 2022.
  • Empirical Phase Diagram for Three-layer Neural Networks with Infinite Width. Hanxu Zhou, Qixuan Zhou, Zhenyuan Jin, Tao Luo, Yaoyu Zhang, Zhi-Qin John Xu. 24 May 2022.
  • Transition to Linearity of General Neural Networks with Directed Acyclic Graph Architecture. Libin Zhu, Chaoyue Liu, M. Belkin. 24 May 2022. [GNN, AI4CE]
  • Gaussian Pre-Activations in Neural Networks: Myth or Reality? Pierre Wolinski, Julyan Arbel. 24 May 2022. [AI4CE]
  • Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks. Blake Bordelon, C. Pehlevan. 19 May 2022. [MLT]
  • High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation. Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang. 03 May 2022. [MLT]
  • NeuralEF: Deconstructing Kernels by Deep Neural Networks. Zhijie Deng, Jiaxin Shi, Jun Zhu. 30 Apr 2022.
  • Wide and Deep Neural Networks Achieve Optimality for Classification. Adityanarayanan Radhakrishnan, M. Belkin, Caroline Uhler. 29 Apr 2022.