What can linearized neural networks actually say about generalization?
12 June 2021
Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard

Papers citing "What can linearized neural networks actually say about generalization?"

14 papers shown
Fast Training of Sinusoidal Neural Fields via Scaling Initialization (07 Oct 2024)
Taesun Yeom, Sangyoon Lee, Jaeho Lee

Neural Lineage (17 Jun 2024)
Runpeng Yu, Xinchao Wang

Dropout MPC: An Ensemble Neural MPC Approach for Systems with Learned Dynamics (04 Jun 2024)
Spyridon Syntakas, K. Vlachos

Modify Training Directions in Function Space to Reduce Generalization Error (25 Jul 2023)
Yi Yu, Wenlian Lu, Boyu Chen

Supervision Complexity and its Role in Knowledge Distillation (28 Jan 2023)
Hrayr Harutyunyan, A. S. Rawat, A. Menon, Seungyeon Kim, Surinder Kumar

Evolution of Neural Tangent Kernels under Benign and Adversarial Training (21 Oct 2022) [AAML]
Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus

What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness? (11 Oct 2022) [AAML]
Nikolaos Tsilivis, Julia Kempe

Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty (19 Sep 2022)
Thomas George, Guillaume Lajoie, A. Baratin

Can we achieve robustness from data alone? (24 Jul 2022) [OOD, DD]
Nikolaos Tsilivis, Jingtong Su, Julia Kempe

Learning sparse features can lead to overfitting in neural networks (24 Jun 2022) [MLT]
Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, M. Wyart

A Structured Dictionary Perspective on Implicit Neural Representations (03 Dec 2021)
Gizem Yüce, Guillermo Ortiz-Jiménez, Beril Besbinar, P. Frossard

A linearized framework and a new benchmark for model selection for fine-tuning (29 Jan 2021) [ALM]
Aditya Deshpande, Alessandro Achille, Avinash Ravichandran, Hao Li, L. Zancato, Charless C. Fowlkes, Rahul Bhotika, Stefano Soatto, Pietro Perona

A Unified Paths Perspective for Pruning at Initialization (26 Jan 2021)
Thomas Gebhart, Udit Saxena, Paul Schrater

Geometric compression of invariant manifolds in neural nets (22 Jul 2020) [MLT]
J. Paccolat, Leonardo Petrini, Mario Geiger, Kevin Tyloo, M. Wyart