Feature Learning in $L_{2}$-regularized DNNs: Attraction/Repulsion and Sparsity

31 May 2022
Arthur Jacot, Eugene Golikov, Clément Hongler, Franck Gabriel
MLT
ArXiv (abs) | PDF | HTML

Papers citing "Feature Learning in $L_{2}$-regularized DNNs: Attraction/Repulsion and Sparsity"

13 / 13 papers shown
Geometry of Learning -- L2 Phase Transitions in Deep and Shallow Neural Networks
Ibrahim Talha Ersoy, Karoline Wiesner
10 May 2025
On the Cone Effect in the Learning Dynamics
Zhanpeng Zhou, Yongyi Yang, Jie Ren, Mahito Sugiyama, Junchi Yan
20 Mar 2025
Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse
Arthur Jacot, Peter Súkeník, Zihan Wang, Marco Mondelli
07 Oct 2024
Mixed Dynamics In Linear Networks: Unifying the Lazy and Active Regimes
Zhenfeng Tu, Santiago Aranguri, Arthur Jacot
27 May 2024
Hamiltonian Mechanics of Feature Learning: Bottleneck Structure in Leaky ResNets
Arthur Jacot, Alexandre Kaiser
27 May 2024
How do Minimum-Norm Shallow Denoisers Look in Function Space?
Chen Zeno, Greg Ongie, Yaniv Blumenfeld, Nir Weinberger, Daniel Soudry
12 Nov 2023
Mechanism of feature learning in convolutional neural networks
Daniel Beaglehole, Adityanarayanan Radhakrishnan, Parthe Pandit, Misha Belkin
FAtt, MLT
01 Sep 2023
Bottleneck Structure in Learned Features: Low-Dimension vs Regularity Tradeoff
Arthur Jacot
MLT
30 May 2023
Variation Spaces for Multi-Output Neural Networks: Insights on Multi-Task Learning and Network Compression
Joseph Shenouda, Rahul Parhi, Kangwook Lee, Robert D. Nowak
25 May 2023
Implicit bias of SGD in $L_{2}$-regularized linear DNNs: One-way jumps from high to low rank
Zihan Wang, Arthur Jacot
25 May 2023
On the Stepwise Nature of Self-Supervised Learning
James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht
SSL
27 Mar 2023
PathProx: A Proximal Gradient Algorithm for Weight Decay Regularized Deep Neural Networks
Liu Yang, Jifan Zhang, Joseph Shenouda, Dimitris Papailiopoulos, Kangwook Lee, Robert D. Nowak
06 Oct 2022
Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions
Arthur Jacot
29 Sep 2022