On Exact Computation with an Infinitely Wide Neural Net

arXiv:1904.11955 · 26 April 2019
Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang

Papers citing "On Exact Computation with an Infinitely Wide Neural Net"

Showing 50 of 225 citing papers.

Machine Learning and Deep Learning -- A review for Ecologists
Maximilian Pichler, F. Hartig · 11 Apr 2022

On the Spectral Bias of Convolutional Neural Tangent and Gaussian Process Kernels
Amnon Geifman, Meirav Galun, David Jacobs, Ronen Basri · 17 Mar 2022

Global Convergence of MAML and Theory-Inspired Neural Architecture Search for Few-Shot Learning
Haoxiang Wang, Yite Wang, Ruoyu Sun, Bo-wen Li · 17 Mar 2022

Generalization Through The Lens Of Leave-One-Out Error
Gregor Bachmann, Thomas Hofmann, Aurelien Lucchi · 07 Mar 2022

Better Supervisory Signals by Observing Learning Paths
Yi Ren, Shangmin Guo, Danica J. Sutherland · 04 Mar 2022
Explicitising The Implicit Intrepretability of Deep Neural Networks Via Duality
Chandrashekar Lakshminarayanan, Ashutosh Kumar Singh, A. Rajkumar · AI4CE · 01 Mar 2022

Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity
Shiyun Xu, Zhiqi Bu, Pratik Chaudhari, Ian J. Barnett · 25 Feb 2022

Random Feature Amplification: Feature Learning and Generalization in Neural Networks
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett · MLT · 15 Feb 2022

Finding Dynamics Preserving Adversarial Winning Tickets
Xupeng Shi, Pengfei Zheng, Adam Ding, Yuan Gao, Weizhong Zhang · AAML · 14 Feb 2022

A Geometric Understanding of Natural Gradient
Qinxun Bai, S. Rosenberg, Wei Xu · 13 Feb 2022
Demystify Optimization and Generalization of Over-parameterized PAC-Bayesian Learning
Wei Huang, Chunrui Liu, Yilan Chen, Tianyu Liu, R. Xu · BDL, MLT · 04 Feb 2022

Deep Layer-wise Networks Have Closed-Form Weights
Chieh-Tsai Wu, A. Masoomi, A. Gretton, Jennifer Dy · 01 Feb 2022

Interplay between depth of neural networks and locality of target functions
Takashi Mori, Masakuni Ueda · 28 Jan 2022

Automated Reinforcement Learning (AutoRL): A Survey and Open Problems
Jack Parker-Holder, Raghunandan Rajan, Xingyou Song, André Biedenkapp, Yingjie Miao, ..., Vu-Linh Nguyen, Roberto Calandra, Aleksandra Faust, Frank Hutter, Marius Lindauer · AI4CE · 11 Jan 2022
Separation of Scales and a Thermodynamic Description of Feature Learning in Some CNNs
Inbar Seroussi, Gadi Naveh, Z. Ringel · 31 Dec 2021

Rethinking Influence Functions of Neural Networks in the Over-parameterized Regime
Rui Zhang, Shihua Zhang · TDI · 15 Dec 2021

Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks
P. Esser, L. C. Vankadara, D. Ghoshdastidar · 07 Dec 2021

A generalization gap estimation for overparameterized models via the Langevin functional variance
Akifumi Okuno, Keisuke Yano · 07 Dec 2021

Fast Graph Neural Tangent Kernel via Kronecker Sketching
Shunhua Jiang, Yunze Man, Zhao-quan Song, Zheng Yu, Danyang Zhuo · 04 Dec 2021
A Structured Dictionary Perspective on Implicit Neural Representations
Gizem Yüce, Guillermo Ortiz-Jiménez, Beril Besbinar, P. Frossard · 03 Dec 2021

Forward Operator Estimation in Generative Models with Kernel Transfer Operators
Z. Huang, Rudrasis Chakraborty, Vikas Singh · GAN · 01 Dec 2021

Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models
Tri Dao, Beidi Chen, Kaizhao Liang, Jiaming Yang, Zhao-quan Song, Atri Rudra, Christopher Ré · 30 Nov 2021

Embedding Principle: a hierarchical structure of loss landscape of deep neural networks
Yaoyu Zhang, Yuqing Li, Zhongwang Zhang, Tao Luo, Z. Xu · 30 Nov 2021
Offline Neural Contextual Bandits: Pessimism, Optimization and Generalization
Thanh Nguyen-Tang, Sunil R. Gupta, A. Nguyen, Svetha Venkatesh · OffRL · 27 Nov 2021

Learning with convolution and pooling operations in kernel methods
Theodor Misiakiewicz, Song Mei · MLT · 16 Nov 2021

On the Equivalence between Neural Network and Support Vector Machine
Yilan Chen, Wei Huang, Lam M. Nguyen, Tsui-Wei Weng · AAML · 11 Nov 2021

A Johnson--Lindenstrauss Framework for Randomly Initialized CNNs
Ido Nachum, Jan Hązła, Michael C. Gastpar, Anatoly Khina · 03 Nov 2021

Neural Networks as Kernel Learners: The Silent Alignment Effect
Alexander B. Atanasov, Blake Bordelon, C. Pehlevan · MLT · 29 Oct 2021
Rethinking Neural vs. Matrix-Factorization Collaborative Filtering: the Theoretical Perspectives
Zida Cheng, Chuanwei Ruan, Siheng Chen, Sushant Kumar, Ya-Qin Zhang · 23 Oct 2021

AIR-Net: Adaptive and Implicit Regularization Neural Network for Matrix Completion
Zhemin Li, Tao Sun, Hongxia Wang, Bao Wang · 12 Oct 2021

A global convergence theory for deep ReLU implicit networks via over-parameterization
Tianxiang Gao, Hailiang Liu, Jia Liu, Hridesh Rajan, Hongyang Gao · MLT · 11 Oct 2021

Does Preprocessing Help Training Over-parameterized Neural Networks?
Zhao-quan Song, Shuo Yang, Ruizhe Zhang · 09 Oct 2021

New Insights into Graph Convolutional Networks using Neural Tangent Kernels
Mahalakshmi Sabanayagam, P. Esser, D. Ghoshdastidar · 08 Oct 2021
EE-Net: Exploitation-Exploration Neural Networks in Contextual Bandits
Yikun Ban, Yuchen Yan, A. Banerjee, Jingrui He · OffRL · 07 Oct 2021

On the Global Convergence of Gradient Descent for multi-layer ResNets in the mean-field regime
Zhiyan Ding, Shi Chen, Qin Li, S. Wright · MLT, AI4CE · 06 Oct 2021

Exponentially Many Local Minima in Quantum Neural Networks
Xuchen You, Xiaodi Wu · 06 Oct 2021
Data Summarization via Bilevel Optimization
Zalan Borsos, Mojmír Mutný, Marco Tagliasacchi, Andreas Krause · 26 Sep 2021

Fast and Sample-Efficient Interatomic Neural Network Potentials for Molecules and Materials Based on Gaussian Moments
Viktor Zaverkin, David Holzmüller, Ingo Steinwart, Johannes Kästner · 20 Sep 2021

Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization
Difan Zou, Yuan Cao, Yuanzhi Li, Quanquan Gu · MLT, AI4CE · 25 Aug 2021

Simple, Fast, and Flexible Framework for Matrix Completion with Infinite Width Neural Networks
Adityanarayanan Radhakrishnan, George Stefanakis, M. Belkin, Caroline Uhler · 31 Jul 2021
Batch Active Learning at Scale
Gui Citovsky, Giulia DeSalvo, Claudio Gentile, Lazaros Karydas, Anand Rajagopalan, Afshin Rostamizadeh, Sanjiv Kumar · 29 Jul 2021

Dataset Distillation with Infinitely Wide Convolutional Networks
Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee · DD · 27 Jul 2021

The Values Encoded in Machine Learning Research
Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, Michelle Bao · 29 Jun 2021

Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation
Haoxiang Wang, Han Zhao, Bo-wen Li · 16 Jun 2021
Locality defeats the curse of dimensionality in convolutional teacher-student scenarios
Alessandro Favero, Francesco Cagnetta, M. Wyart · 16 Jun 2021

How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective
Akhilan Boopathy, Ila Fiete · 15 Jun 2021

What can linearized neural networks actually say about generalization?
Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard · 12 Jun 2021

Precise characterization of the prior predictive distribution of deep ReLU networks
Lorenzo Noci, Gregor Bachmann, Kevin Roth, Sebastian Nowozin, Thomas Hofmann · BDL, UQCV · 11 Jun 2021

The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective
Geoff Pleiss, John P. Cunningham · 11 Jun 2021

A Neural Tangent Kernel Perspective of GANs
Jean-Yves Franceschi, Emmanuel de Bézenac, Ibrahim Ayed, Mickaël Chen, Sylvain Lamprier, Patrick Gallinari · 10 Jun 2021