ResearchTrend.AI

arXiv:2112.11027
More is Less: Inducing Sparsity via Overparameterization
21 December 2021
H. Chou, J. Maly, Holger Rauhut

Papers citing "More is Less: Inducing Sparsity via Overparameterization" (10 of 10 papers shown)

1. Entropic Mirror Descent for Linear Systems: Polyak's Stepsize and Implicit Bias
   Yura Malitsky, Alexander Posch (05 May 2025)
2. Optimization Insights into Deep Diagonal Linear Networks
   Hippolyte Labarrière, C. Molinari, Lorenzo Rosasco, S. Villa, Cristian Vega (21 Dec 2024)
3. Convex optimization over a probability simplex
   James Chok, G. Vasil (15 May 2023)
4. Robust Implicit Regularization via Weight Normalization
   H. Chou, Holger Rauhut, Rachel A. Ward (09 May 2023)
5. Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction
   Bowen Lei, Dongkuan Xu, Ruqi Zhang, Shuren He, Bani Mallick (09 Jan 2023)
6. Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution
   Jianhao Ma, S. Fattahi (15 Jul 2022)
7. Robust Training under Label Noise by Over-parameterization
   Sheng Liu, Zhihui Zhu, Qing Qu, Chong You (28 Feb 2022)
8. Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks
   Noam Razin, Asaf Maman, Nadav Cohen (27 Jan 2022)
9. Large Learning Rate Tames Homogeneity: Convergence and Balancing Effect
   Yuqing Wang, Minshuo Chen, T. Zhao, Molei Tao (07 Oct 2021)
10. A Continuous-Time Mirror Descent Approach to Sparse Phase Retrieval
    Fan Wu, Patrick Rebeschini (20 Oct 2020)