Training invariances and the low-rank phenomenon: beyond linear networks

28 January 2022
Thien Le
Stefanie Jegelka

Papers citing "Training invariances and the low-rank phenomenon: beyond linear networks"

9 papers shown

  1. Symmetry in Neural Network Parameter Spaces (16 Jun 2025)
     Bo Zhao, Robin Walters, Rose Yu
  2. GrokAlign: Geometric Characterisation and Acceleration of Grokking (14 Jun 2025)
     Thomas Walker, Ahmed Imtiaz Humayun, Randall Balestriero, Richard G. Baraniuk
  3. The late-stage training dynamics of (stochastic) subgradient descent on homogeneous neural networks (08 Feb 2025)
     Sholom Schechtman, Nicolas Schreuder
  4. Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries (04 Feb 2025)
     Chris Kolb, T. Weber, Bernd Bischl, David Rügamer
  5. The Persistence of Neural Collapse Despite Low-Rank Bias: An Analytic Perspective Through Unconstrained Features (30 Oct 2024)
     Connall Garrod, Jonathan P. Keating
  6. Deep ReLU Networks Have Surprisingly Simple Polytopes (16 May 2023)
     Fenglei Fan, Wei Huang, Xiang-yu Zhong, Lecheng Ruan, T. Zeng, Huan Xiong, Fei Wang
  7. The Asymmetric Maximum Margin Bias of Quasi-Homogeneous Neural Networks (07 Oct 2022)
     D. Kunin, Atsushi Yamamura, Chao Ma, Surya Ganguli
  8. Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions (29 Sep 2022)
     Arthur Jacot
  9. Federated Optimization of Smooth Loss Functions (06 Jan 2022)
     Ali Jadbabaie, A. Makur, Devavrat Shah [FedML]