How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model
Francesco Cagnetta, Leonardo Petrini, Umberto M. Tomasini, Alessandro Favero, M. Wyart
arXiv:2307.02129, 5 July 2023
Papers citing "How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model" (10 of 10 papers shown):

A distributional simplicity bias in the learning dynamics of transformers
Riccardo Rende, Federica Gerace, A. Laio, Sebastian Goldt (17 Feb 2025)

Analyzing (In)Abilities of SAEs via Formal Languages
Abhinav Menon, Manish Shrivastava, David M. Krueger, Ekdeep Singh Lubana (15 Oct 2024)

How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning
Arthur Jacot, Seok Hoan Choi, Yuxiao Wen (08 Jul 2024)

U-Nets as Belief Propagation: Efficient Classification, Denoising, and Diffusion in Generative Hierarchical Models
Song Mei (29 Apr 2024)

How deep convolutional neural networks lose spatial information with training
Umberto M. Tomasini, Leonardo Petrini, Francesco Cagnetta, M. Wyart (04 Oct 2022)

Data-driven emergence of convolutional structure in neural networks
Alessandro Ingrosso, Sebastian Goldt (01 Feb 2022)

Sifting out the features by pruning: Are convolutional networks the winning lottery ticket of fully connected ones?
Franco Pellegrini, Giulio Biroli (27 Apr 2021)

The Intrinsic Dimension of Images and Its Impact on Learning
Phillip E. Pope, Chen Zhu, Ahmed Abdelkader, Micah Goldblum, Tom Goldstein (18 Apr 2021)

Towards Learning Convolutions from Scratch
Behnam Neyshabur (27 Jul 2020)

Geometric compression of invariant manifolds in neural nets
J. Paccolat, Leonardo Petrini, Mario Geiger, Kevin Tyloo, M. Wyart (22 Jul 2020)