Do deep neural networks have an inbuilt Occam's razor?

arXiv:2304.06670 · 13 April 2023
Chris Mingard, Henry Rees, Guillermo Valle Pérez, Ard A. Louis
Topics: UQCV, BDL

Papers citing "Do deep neural networks have an inbuilt Occam's razor?" (all 11 shown):

1. Do We Always Need the Simplicity Bias? Looking for Optimal Inductive Biases in the Wild
   Damien Teney, Liangze Jiang, Florin Gogianu, Ehsan Abbasnejad
   13 Mar 2025

2. Do Influence Functions Work on Large Language Models?
   Zhe Li, Wei Zhao, Yige Li, Jun Sun
   Topics: TDI
   30 Sep 2024

3. Exploiting the equivalence between quantum neural networks and perceptrons
   Chris Mingard, Jessica Pointing, Charles London, Yoonsoo Nam, Ard A. Louis
   05 Jul 2024

4. Do Quantum Neural Networks have Simplicity Bias?
   Jessica Pointing
   Topics: AI4CE
   03 Jul 2024

5. Early learning of the optimal constant solution in neural networks and humans
   Jirko Rubruck, Jan P. Bauer, Andrew M. Saxe, Christopher Summerfield
   25 Jun 2024

6. Learning Universal Predictors
   Jordi Grau-Moya, Tim Genewein, Marcus Hutter, Laurent Orseau, Grégoire Delétang, ..., Anian Ruoss, Wenliang Kevin Li, Christopher Mattern, Matthew Aitchison, J. Veness
   26 Jan 2024

7. Simplicity bias, algorithmic probability, and the random logistic map
   B. Hamzi, K. Dingle
   31 Dec 2023

8. In-Context Learning through the Bayesian Prism
   Madhuri Panwar, Kabir Ahuja, Navin Goyal
   Topics: BDL
   08 Jun 2023

9. Stochastic Training is Not Necessary for Generalization
   Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein
   29 Sep 2021

10. Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
    Blake Bordelon, Abdulkadir Canatar, C. Pehlevan
    07 Feb 2020

11. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
    Topics: ODL
    15 Sep 2016