
Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks
arXiv:1910.01663 · 3 October 2019 · AAML
Sanjeev Arora, S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, Dingli Yu

Papers citing "Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks" (40 papers)
Unsupervised Replay Strategies for Continual Learning with Limited Data
Anthony Bazhenov, Pahan Dewasurendra, G. Krishnan, Jean Erik Delanois · CLL · 21 Oct 2024

Neural Lineage
Runpeng Yu, Xinchao Wang · 17 Jun 2024

Deep Continuous Networks
Nergis Tomen, S. Pintea, J. C. V. Gemert · 02 Feb 2024

Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models
Guillermo Ortiz-Jiménez, Alessandro Favero, P. Frossard · MoMe · 22 May 2023

Cut your Losses with Squentropy
Like Hui, M. Belkin, S. Wright · UQCV · 08 Feb 2023

A Simple Algorithm For Scaling Up Kernel Methods
Tengyu Xu, Bryan T. Kelly, Semyon Malamud · 26 Jan 2023

Global Convergence of SGD On Two Layer Neural Nets
Pulkit Gopalani, Anirbit Mukherjee · 20 Oct 2022

Why Quantization Improves Generalization: NTK of Binary Weight Neural Networks
Kaiqi Zhang, Ming Yin, Yu-Xiang Wang · MQ · 13 Jun 2022

Infinite Recommendation Networks: A Data-Centric Approach
Noveen Sachdeva, Mehak Preet Dhaliwal, Carole-Jean Wu, Julian McAuley · DD · 03 Jun 2022

Analyzing Tree Architectures in Ensembles via Neural Tangent Kernel
Ryuichi Kanoh, M. Sugiyama · 25 May 2022

Wide and Deep Neural Networks Achieve Optimality for Classification
Adityanarayanan Radhakrishnan, M. Belkin, Caroline Uhler · 29 Apr 2022

Multi-model Ensemble Analysis with Neural Network Gaussian Processes
Trevor Harris, B. Li, Ryan Sriver · 08 Feb 2022

Deep Layer-wise Networks Have Closed-Form Weights
Chieh-Tsai Wu, A. Masoomi, A. Gretton, Jennifer Dy · 01 Feb 2022

Forward Operator Estimation in Generative Models with Kernel Transfer Operators
Z. Huang, Rudrasis Chakraborty, Vikas Singh · GAN · 01 Dec 2021

On the Effectiveness of Neural Ensembles for Image Classification with Small Datasets
Lorenzo Brigato, Luca Iocchi · UQCV · 29 Nov 2021

On the Equivalence between Neural Network and Support Vector Machine
Yilan Chen, Wei Huang, Lam M. Nguyen, Tsui-Wei Weng · AAML · 11 Nov 2021

Subquadratic Overparameterization for Shallow Neural Networks
Chaehwan Song, Ali Ramezani-Kebrya, Thomas Pethick, Armin Eftekhari, V. Cevher · 02 Nov 2021

VC dimension of partially quantized neural networks in the overparametrized regime
Yutong Wang, Clayton D. Scott · 06 Oct 2021

How Powerful is Graph Convolution for Recommendation?
Yifei Shen, Yongji Wu, Yao Zhang, Caihua Shan, Jun Zhang, Khaled B. Letaief, Dongsheng Li · GNN · 17 Aug 2021

Dataset Distillation with Infinitely Wide Convolutional Networks
Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee · DD · 27 Jul 2021

How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective
Akhilan Boopathy, Ila Fiete · 15 Jun 2021

What can linearized neural networks actually say about generalization?
Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard · 12 Jun 2021

The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective
Geoff Pleiss, John P. Cunningham · 11 Jun 2021

A Neural Tangent Kernel Perspective of GANs
Jean-Yves Franceschi, Emmanuel de Bézenac, Ibrahim Ayed, Mickaël Chen, Sylvain Lamprier, Patrick Gallinari · 10 Jun 2021

The Future is Log-Gaussian: ResNets and Their Infinite-Depth-and-Width Limit at Initialization
Mufan Bill Li, Mihai Nica, Daniel M. Roy · 07 Jun 2021

Priors in Bayesian Deep Learning: A Review
Vincent Fortuin · UQCV, BDL · 14 May 2021

A Neural Pre-Conditioning Active Learning Algorithm to Reduce Label Complexity
Seo Taek Kong, Soomin Jeon, Dongbin Na, Jaewon Lee, Honglak Lee, Kyu-Hwan Jung · 08 Apr 2021

Dataset Meta-Learning from Kernel Ridge-Regression
Timothy Nguyen, Zhourung Chen, Jaehoon Lee · DD · 30 Oct 2020

Multiple Descent: Design Your Own Generalization Curve
Lin Chen, Yifei Min, M. Belkin, Amin Karbasi · DRL · 03 Aug 2020

Tensor Programs II: Neural Tangent Kernel for Any Architecture
Greg Yang · 25 Jun 2020

On the Preservation of Spatio-temporal Information in Machine Learning Applications
Yigit Oktar, Mehmet Türkan · 15 Jun 2020

To Each Optimizer a Norm, To Each Norm its Generalization
Sharan Vaswani, Reza Babanezhad, Jose Gallego, Aaron Mishkin, Simon Lacoste-Julien, Nicolas Le Roux · 11 Jun 2020

Modularizing Deep Learning via Pairwise Learning With Kernels
Shiyu Duan, Shujian Yu, José C. Príncipe · MoMe · 12 May 2020

Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens · BDL · 23 Apr 2020

A Close Look at Deep Learning with Small Data
Lorenzo Brigato, Luca Iocchi · 28 Mar 2020

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala · 02 Mar 2020

On the infinite width limit of neural networks with a standard parameterization
Jascha Narain Sohl-Dickstein, Roman Novak, S. Schoenholz, Jaehoon Lee · 21 Jan 2020

Neural Tangents: Fast and Easy Infinite Neural Networks in Python
Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Narain Sohl-Dickstein, S. Schoenholz · 05 Dec 2019

Information in Infinite Ensembles of Infinitely-Wide Neural Networks
Ravid Shwartz-Ziv, Alexander A. Alemi · 20 Nov 2019

Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning
Charles H. Martin, Michael W. Mahoney · AI4CE · 02 Oct 2018