ResearchTrend.AI

Relative stability toward diffeomorphisms indicates performance in deep nets (arXiv:2105.02468)

6 May 2021 · Leonardo Petrini, Alessandro Favero, Mario Geiger, M. Wyart · OOD

Papers citing "Relative stability toward diffeomorphisms indicates performance in deep nets"

14 / 14 papers shown
How Deep Networks Learn Sparse and Hierarchical Data: the Sparse Random Hierarchy Model
Umberto M. Tomasini, M. Wyart · BDL · 16 Apr 2024

Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature Learning
Yuxiao Wen, Arthur Jacot · 12 Feb 2024

How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model
Francesco Cagnetta, Leonardo Petrini, Umberto M. Tomasini, Alessandro Favero, M. Wyart · BDL · 05 Jul 2023

How deep convolutional neural networks lose spatial information with training
Umberto M. Tomasini, Leonardo Petrini, Francesco Cagnetta, M. Wyart · 04 Oct 2022

Automatic Data Augmentation via Invariance-Constrained Learning
Ignacio Hounie, Luiz F. O. Chamon, Alejandro Ribeiro · 29 Sep 2022

What Can Be Learnt With Wide Convolutional Neural Networks?
Francesco Cagnetta, Alessandro Favero, M. Wyart · MLT · 01 Aug 2022

Synergy and Symmetry in Deep Learning: Interactions between the Data, Model, and Inference Algorithm
Lechao Xiao, Jeffrey Pennington · 11 Jul 2022

Learning sparse features can lead to overfitting in neural networks
Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, M. Wyart · MLT · 24 Jun 2022

Data augmentation with mixtures of max-entropy transformations for filling-level classification
Apostolos Modas, Andrea Cavallaro, P. Frossard · 08 Mar 2022

Measuring dissimilarity with diffeomorphism invariance
Théophile Cantelobre, C. Ciliberto, Benjamin Guedj, Alessandro Rudi · 11 Feb 2022

PRIME: A few primitives can boost robustness to common corruptions
Apostolos Modas, Rahul Rade, Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard · AAML · 27 Dec 2021

On the Sample Complexity of Learning under Invariance and Geometric Stability
A. Bietti, Luca Venturi, Joan Bruna · 14 Jun 2021

Geometric compression of invariant manifolds in neural nets
J. Paccolat, Leonardo Petrini, Mario Geiger, Kevin Tyloo, M. Wyart · MLT · 22 Jul 2020

On Translation Invariance in CNNs: Convolutional Layers can Exploit Absolute Spatial Location
O. Kayhan, J. C. V. Gemert · 16 Mar 2020