The Law of Parsimony in Gradient Descent for Learning Deep Linear Networks

1 June 2023 · arXiv:2306.01154
Can Yaras, P. Wang, Wei Hu, Zhihui Zhu, Laura Balzano, Qing Qu

Papers citing "The Law of Parsimony in Gradient Descent for Learning Deep Linear Networks"

5 of 5 citing papers shown:

Embedded Visual Prompt Tuning
Wenqiang Zu, Shenghao Xie, Qing Zhao, Guoqi Li, Lei Ma
VLM, MedIm · 01 Jul 2024

Early Neuron Alignment in Two-layer ReLU Networks with Small Initialization
Hancheng Min, Enrique Mallada, René Vidal
MLT · 24 Jul 2023

Perturbation Analysis of Neural Collapse
Tom Tirer, Haoxiang Huang, Jonathan Niles-Weed
AAML · 29 Oct 2022

Exploring Deep Neural Networks via Layer-Peeled Model: Minority Collapse in Imbalanced Training
Cong Fang, Hangfeng He, Qi Long, Weijie J. Su
FAtt · 29 Jan 2021

Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington
14 Jun 2018