
Why neural networks find simple solutions: the many regularizers of geometric complexity

27 September 2022 · arXiv:2209.13083
Benoit Dherin, Michael Munn, M. Rosca, David Barrett
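For context on the listing below: the paper defines geometric complexity as the discrete Dirichlet energy of the network over a batch, i.e. the mean squared Frobenius norm of the input-output Jacobian. A minimal JAX sketch of that quantity, assuming this definition; the function names and the toy check are illustrative, not taken from the paper's code:

    import jax
    import jax.numpy as jnp

    def geometric_complexity(f, params, batch):
        # (1/n) * sum_i ||J_f(x_i)||_F^2: mean squared Frobenius norm
        # of the per-example input-output Jacobian df/dx.
        def sq_frob(x):
            jac = jax.jacobian(lambda xi: f(params, xi))(x)
            return jnp.sum(jac ** 2)
        return jnp.mean(jax.vmap(sq_frob)(batch))

    # Toy check: for a linear map f(x) = W x the Jacobian is W everywhere,
    # so the geometric complexity reduces to ||W||_F^2.
    W = jnp.ones((2, 3))
    xs = jnp.ones((5, 3))
    print(geometric_complexity(lambda p, x: p @ x, W, xs))  # 6.0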

Papers citing "Why neural networks find simple solutions: the many regularizers of geometric complexity"

10 / 10 papers shown
A Margin-based Multiclass Generalization Bound via Geometric Complexity
Michael Munn, Benoit Dherin, Javier Gonzalvo · UQCV · 27 / 2 / 0 · 28 May 2024

From Robustness to Improved Generalization and Calibration in Pre-trained Language Models
Josip Jukić, Jan Snajder · 21 / 0 / 0 · 31 Mar 2024

Neural Redshift: Random Networks are not Random Functions
Damien Teney, A. Nicolicioiu, Valentin Hartmann, Ehsan Abbasnejad · 86 / 18 / 0 · 04 Mar 2024

Why Does Little Robustness Help? Understanding and Improving Adversarial Transferability from Surrogate Training
Yechao Zhang, Shengshan Hu, Leo Yu Zhang, Junyu Shi, Minghui Li, Xiaogeng Liu, Wei Wan, Hai Jin · AAML · 22 / 20 / 0 · 15 Jul 2023

Mathematical Challenges in Deep Learning
V. Nia, Guojun Zhang, I. Kobyzev, Michael R. Metel, Xinlin Li, ..., S. Hemati, M. Asgharian, Linglong Kong, Wulong Liu, Boxing Chen · AI4CE, VLM · 22 / 1 / 0 · 24 Mar 2023

On the Lipschitz Constant of Deep Networks and Double Descent
Matteo Gamba, Hossein Azizpour, Marten Bjorkman · 16 / 6 / 0 · 28 Jan 2023

Deep Double Descent via Smooth Interpolation
Matteo Gamba, Erik Englesson, Marten Bjorkman, Hossein Azizpour · 51 / 10 / 0 · 21 Sep 2022

Stochastic Training is Not Necessary for Generalization
Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein · 81 / 72 / 0 · 29 Sep 2021

Geometric deep learning: going beyond Euclidean data
M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, P. Vandergheynst · GNN · 231 / 3,202 / 0 · 24 Nov 2016

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 273 / 2,696 / 0 · 15 Sep 2016