Data-driven emergence of convolutional structure in neural networks
Alessandro Ingrosso, Sebastian Goldt
Proceedings of the National Academy of Sciences of the United States of America (PNAS), 2022 · 1 February 2022 · arXiv:2202.00565

Papers citing "Data-driven emergence of convolutional structure in neural networks"

19 papers shown
Feature learning from non-Gaussian inputs: the case of Independent Component Analysis in high dimensions
Fabiola Ricci, Lorenzo Bardone, Sebastian Goldt · 31 Mar 2025

Nonlinear dynamics of localization in neural receptive fields
Leon Lufkin, Andrew M. Saxe, Erin Grant · Neural Information Processing Systems (NeurIPS), 2025 · 28 Jan 2025

Classifying Overlapping Gaussian Mixtures in High Dimensions: From Optimal Classifiers to Neural Nets
Khen Cohen, Noam Levi, Yaron Oz · 28 May 2024

Sliding down the stairs: how correlated latent variables accelerate learning with neural networks
Lorenzo Bardone, Sebastian Goldt · 12 Apr 2024

Learning from higher-order statistics, efficiently: hypothesis tests, random features, and neural networks
Eszter Székely, Lorenzo Bardone, Federica Gerace, Sebastian Goldt · 22 Dec 2023

Local Kernel Renormalization as a mechanism for feature learning in overparametrized Convolutional Neural Networks
R. Aiudi, R. Pacelli, A. Vezzani, R. Burioni, P. Rotondo · Nature Communications (Nat. Commun.), 2023 · 21 Jul 2023

Loss Dynamics of Temporal Difference Reinforcement Learning
Blake Bordelon, P. Masset, Henry Kuo, Cengiz Pehlevan · Neural Information Processing Systems (NeurIPS), 2023 · 10 Jul 2023

How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model
Francesco Cagnetta, Leonardo Petrini, Umberto M. Tomasini, Alessandro Favero, Matthieu Wyart · Physical Review X (PRX), 2023 · 05 Jul 2023

Mapping of attention mechanisms to a generalized Potts model
Riccardo Rende, Federica Gerace, Alessandro Laio, Sebastian Goldt · Physical Review Research (Phys. Rev. Res.), 2023 · 14 Apr 2023

Inversion dynamics of class manifolds in deep learning reveals tradeoffs underlying generalisation
Simone Ciceri, Lorenzo Cassani, Matteo Osella, P. Rotondo, P. Pizzochero, M. Gherardi · 09 Mar 2023

Optimal transfer protocol by incremental layer defrosting
Federica Gerace, Diego Doimo, Stefano Sarao Mannelli, Luca Saglietti, Alessandro Laio · 02 Mar 2023

Are Gaussian data all you need? Extents and limits of universality in high-dimensional generalized linear estimation
Luca Pesce, Florent Krzakala, Bruno Loureiro, Ludovic Stephan · 17 Feb 2023

Neural networks trained with SGD learn distributions of increasing complexity
Maria Refinetti, Alessandro Ingrosso, Sebastian Goldt · International Conference on Machine Learning (ICML), 2022 · 21 Nov 2022

A simple probabilistic neural network for machine understanding
Rongrong Xie, M. Marsili · Journal of Statistical Mechanics: Theory and Experiment (JSTAT), 2022 · 24 Oct 2022

What Can Be Learnt With Wide Convolutional Neural Networks?
Francesco Cagnetta, Alessandro Favero, Matthieu Wyart · International Conference on Machine Learning (ICML), 2022 · 01 Aug 2022

Synergy and Symmetry in Deep Learning: Interactions between the Data, Model, and Inference Algorithm
Lechao Xiao, Jeffrey Pennington · International Conference on Machine Learning (ICML), 2022 · 11 Jul 2022

Learning sparse features can lead to overfitting in neural networks
Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, Matthieu Wyart · Neural Information Processing Systems (NeurIPS), 2022 · 24 Jun 2022

The impact of memory on learning sequence-to-sequence tasks
Alireza Seif, S. Loos, Gennaro Tucci, É. Roldán, Sebastian Goldt · 29 May 2022

Gaussian Universality of Perceptrons with Random Labels
Federica Gerace, Florent Krzakala, Bruno Loureiro, Ludovic Stephan, Lenka Zdeborová · Physical Review E (Phys. Rev. E), 2022 · 26 May 2022