ResearchTrend.AI

How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model
arXiv:2307.02129

5 July 2023
Francesco Cagnetta, Leonardo Petrini, Umberto M. Tomasini, Alessandro Favero, M. Wyart

Papers citing "How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model"

10 / 10 papers shown
A distributional simplicity bias in the learning dynamics of transformers
Riccardo Rende, Federica Gerace, A. Laio, Sebastian Goldt
17 Feb 2025

Analyzing (In)Abilities of SAEs via Formal Languages
Abhinav Menon, Manish Shrivastava, David M. Krueger, Ekdeep Singh Lubana
15 Oct 2024

How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning
Arthur Jacot, Seok Hoan Choi, Yuxiao Wen
08 Jul 2024

U-Nets as Belief Propagation: Efficient Classification, Denoising, and Diffusion in Generative Hierarchical Models
Song Mei
29 Apr 2024

How deep convolutional neural networks lose spatial information with training
Umberto M. Tomasini, Leonardo Petrini, Francesco Cagnetta, M. Wyart
04 Oct 2022

Data-driven emergence of convolutional structure in neural networks
Alessandro Ingrosso, Sebastian Goldt
01 Feb 2022

Sifting out the features by pruning: Are convolutional networks the winning lottery ticket of fully connected ones?
Franco Pellegrini, Giulio Biroli
27 Apr 2021

The Intrinsic Dimension of Images and Its Impact on Learning
Phillip E. Pope, Chen Zhu, Ahmed Abdelkader, Micah Goldblum, Tom Goldstein
18 Apr 2021

Towards Learning Convolutions from Scratch
Behnam Neyshabur
27 Jul 2020

Geometric compression of invariant manifolds in neural nets
J. Paccolat, Leonardo Petrini, Mario Geiger, Kevin Tyloo, M. Wyart
22 Jul 2020