arXiv:2404.10727 · Cited By
How Deep Networks Learn Sparse and Hierarchical Data: the Sparse Random Hierarchy Model
Umberto M. Tomasini, M. Wyart
16 April 2024 · BDL
Papers citing "How Deep Networks Learn Sparse and Hierarchical Data: the Sparse Random Hierarchy Model" (7 of 7 papers shown)
U-Nets as Belief Propagation: Efficient Classification, Denoising, and Diffusion in Generative Hierarchical Models
Song Mei · 3DV, AI4CE, DiffM · 29 Apr 2024
How deep convolutional neural networks lose spatial information with training
Umberto M. Tomasini, Leonardo Petrini, Francesco Cagnetta, M. Wyart · 04 Oct 2022
A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups
Marc Finzi, Max Welling, A. Wilson · 19 Apr 2021
Learning with invariances in random features and kernel models
Song Mei, Theodor Misiakiewicz, Andrea Montanari · OOD · 25 Feb 2021
E(3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials
Simon L. Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, J. Mailoa, M. Kornbluth, N. Molinari, Tess E. Smidt, Boris Kozinsky · 08 Jan 2021
On Translation Invariance in CNNs: Convolutional Layers can Exploit Absolute Spatial Location
O. Kayhan, J. C. V. Gemert · 16 Mar 2020
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei · 23 Jan 2020