Width Provably Matters in Optimization for Deep Linear Neural Networks

24 January 2019 · arXiv:1901.08572
S. Du · Wei Hu

Papers citing "Width Provably Matters in Optimization for Deep Linear Neural Networks"

20 / 20 papers shown
From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks
Clémentine Dominé, Nicolas Anguita, A. Proca, Lukas Braun, D. Kunin, P. Mediano, Andrew M. Saxe
22 Sep 2024
Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation
Can Yaras, Peng Wang, Laura Balzano, Qing Qu
06 Jun 2024 · AI4CE
NTK-Guided Few-Shot Class Incremental Learning
Jingren Liu, Zhong Ji, Yanwei Pang, Yunlong Yu
19 Mar 2024 · CLL
On a continuous time model of gradient descent dynamics and instability in deep learning
Mihaela Rosca, Yan Wu, Chongli Qin, Benoit Dherin
03 Feb 2023
Infinite-width limit of deep linear neural networks
Lénaïc Chizat, Maria Colombo, Xavier Fernández-Real, Alessio Figalli
29 Nov 2022
Deep Linear Networks can Benignly Overfit when Shallow Ones Do
Niladri S. Chatterji, Philip M. Long
19 Sep 2022
The Neural Race Reduction: Dynamics of Abstraction in Gated Networks
Andrew M. Saxe, Shagun Sodhani, Sam Lewallen
21 Jul 2022 · AI4CE
Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution
Jianhao Ma, S. Fattahi
15 Jul 2022
Imitating Deep Learning Dynamics via Locally Elastic Stochastic Differential Equations
Jiayao Zhang, Hua Wang, Weijie J. Su
11 Oct 2021
Towards Demystifying Representation Learning with Non-contrastive Self-supervision
Xiang Wang, Xinlei Chen, S. Du, Yuandong Tian
11 Oct 2021 · SSL
The loss landscape of deep linear neural networks: a second-order analysis
E. M. Achour, François Malgouyres, Sébastien Gerchinovitz
28 Jul 2021 · ODL
Understanding self-supervised Learning Dynamics without Contrastive Pairs
Yuandong Tian, Xinlei Chen, Surya Ganguli
12 Feb 2021 · SSL
The Connection Between Approximation, Depth Separation and Learnability in Neural Networks
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir
31 Jan 2021
Deep matrix factorizations
Pierre De Handschutter, Nicolas Gillis, Xavier Siebert
01 Oct 2020 · BDL
Analysis of Knowledge Transfer in Kernel Regime
Arman Rahbar, Ashkan Panahi, Chiranjib Bhattacharyya, Devdatt Dubhashi, M. Chehreghani
30 Mar 2020
Global Convergence of Gradient Descent for Deep Linear Residual Networks
Lei Wu, Qingcan Wang, Chao Ma
02 Nov 2019 · ODL, AI4CE
Implicit Regularization in Deep Matrix Factorization
Sanjeev Arora, Nadav Cohen, Wei Hu, Yuping Luo
31 May 2019 · AI4CE
On Exact Computation with an Infinitely Wide Neural Net
Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang
26 Apr 2019
Analysis of the Gradient Descent Algorithm for a Deep Neural Network Model with Skip-connections
E. Weinan, Chao Ma, Qingcan Wang, Lei Wu
10 Apr 2019 · MLT
Global optimality conditions for deep neural networks
Chulhee Yun, S. Sra, Ali Jadbabaie
08 Jul 2017