Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? -- A Neural Tangent Kernel Perspective

14 February 2020
Kaixuan Huang, Yuqing Wang, Molei Tao, T. Zhao
MLT

Papers citing "Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? -- A Neural Tangent Kernel Perspective"

13 papers shown
Fast and Exact Enumeration of Deep Networks Partitions Regions
Randall Balestriero, Yann LeCun
20 Jan 2024

On the Neural Tangent Kernel of Equilibrium Models
Zhili Feng, J. Zico Kolter
21 Oct 2023

The Interpolating Information Criterion for Overparameterized Models
Liam Hodgkinson, Christopher van der Heide, Roberto Salomone, Fred Roosta, Michael W. Mahoney
15 Jul 2023

Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks
Eshaan Nichani, Alexandru Damian, Jason D. Lee
MLT
11 May 2023

Dynamical systems' based neural networks
E. Celledoni, Davide Murari, B. Owren, Carola-Bibiane Schönlieb, Ferdia Sherry
OOD
05 Oct 2022

Analyzing Tree Architectures in Ensembles via Neural Tangent Kernel
Ryuichi Kanoh, M. Sugiyama
25 May 2022

Wide and Deep Neural Networks Achieve Optimality for Classification
Adityanarayanan Radhakrishnan, M. Belkin, Caroline Uhler
29 Apr 2022

Generalization Through The Lens Of Leave-One-Out Error
Gregor Bachmann, Thomas Hofmann, Aurélien Lucchi
07 Mar 2022

On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons
Fangshuo Liao, Anastasios Kyrillidis
05 Dec 2021

A Neural Tangent Kernel Perspective of GANs
Jean-Yves Franceschi, Emmanuel de Bézenac, Ibrahim Ayed, Mickaël Chen, Sylvain Lamprier, Patrick Gallinari
10 Jun 2021

Generalization Guarantees for Neural Architecture Search with Train-Validation Split
Samet Oymak, Mingchen Li, Mahdi Soltanolkotabi
AI4CE, OOD
29 Apr 2021

Experiments with Rich Regime Training for Deep Learning
Xinyan Li, A. Banerjee
26 Feb 2021

Gradient Starvation: A Learning Proclivity in Neural Networks
Mohammad Pezeshki, Sekouba Kaba, Yoshua Bengio, Aaron Courville, Doina Precup, Guillaume Lajoie
MLT
18 Nov 2020