arXiv:1909.11522
Neural networks are a priori biased towards Boolean functions with low entropy
Chris Mingard, Joar Skalse, Guillermo Valle Pérez, David Martínez-Rubio, Vladimir Mikulik, A. Louis
25 September 2019
Papers citing "Neural networks are a priori biased towards Boolean functions with low entropy" (18 papers)
A Modern Look at Simplicity Bias in Image Classification Tasks
Xiaoguang Chang, Teng Wang, Changyin Sun
13 Sep 2025
Characterising the Inductive Biases of Neural Networks on Boolean Data
Chris Mingard, Lukas Seier, Niclas Goring, Andrei-Vlad Badelita, Charles London, Ard A. Louis
29 May 2025
Can Large Reasoning Models Self-Train?
Sheikh Shafayat, Fahim Tajwar, Ruslan Salakhutdinov, J. Schneider, Andrea Zanette
27 May 2025
SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning
International Conference on Learning Representations (ICLR), 2024
Hojoon Lee, Dongyoon Hwang, Donghu Kim, Hyunseung Kim, Jun Jet Tai, K. Subramanian, Peter R. Wurman, Jaegul Choo, Peter Stone, Takuma Seno
13 Oct 2024
Neural Redshift: Random Networks are not Random Functions
Damien Teney, A. Nicolicioiu, Valentin Hartmann, Ehsan Abbasnejad
04 Mar 2024
Simplicity bias, algorithmic probability, and the random logistic map
B. Hamzi, K. Dingle
31 Dec 2023
Points of non-linearity of functions generated by random neural networks
David Holmes
19 Apr 2023
Deep neural networks have an inbuilt Occam's razor
Nature Communications (Nat. Commun.), 2023
Chris Mingard, Henry Rees, Guillermo Valle Pérez, A. Louis
13 Apr 2023
The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning
International Conference on Machine Learning (ICML), 2023
Micah Goldblum, Marc Finzi, K. Rowan, A. Wilson
11 Apr 2023
Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
S. Bhattamishra, Arkil Patel, Varun Kanade, Phil Blunsom
22 Nov 2022
A law of adversarial risk, interpolation, and label noise
International Conference on Learning Representations (ICLR), 2022
Daniel Paleka, Amartya Sanyal
08 Jul 2022
Overview frequency principle/spectral bias in deep learning
Communication on Applied Mathematics and Computation (CAMC), 2022
Z. Xu, Yaoyu Zhang
19 Jan 2022
Embedding Principle: a hierarchical structure of loss landscape of deep neural networks
Yaoyu Zhang, Yuqing Li, Zhongwang Zhang, Z. Xu
30 Nov 2021
Embedding Principle of Loss Landscape of Deep Neural Networks
Neural Information Processing Systems (NeurIPS), 2021
Yaoyu Zhang, Zhongwang Zhang, Z. Xu
30 May 2021
Double-descent curves in neural networks: a new perspective using Gaussian processes
AAAI Conference on Artificial Intelligence (AAAI), 2021
Ouns El Harzli, Bernardo Cuenca Grau, Guillermo Valle Pérez, A. Louis
14 Feb 2021
On the exact computation of linear frequency principle dynamics and its generalization
Yaoyu Zhang, Zheng Ma, Z. Xu
15 Oct 2020
Deep frequency principle towards understanding why deeper learning is faster
AAAI Conference on Artificial Intelligence (AAAI), 2020
Zhi-Qin John Xu, Hanxu Zhou
28 Jul 2020
Is SGD a Bayesian sampler? Well, almost
Chris Mingard, Guillermo Valle Pérez, Joar Skalse, A. Louis
26 Jun 2020