Neural networks are a priori biased towards Boolean functions with low entropy
arXiv: 1909.11522
25 September 2019
Chris Mingard, Joar Skalse, Guillermo Valle Pérez, David Martínez-Rubio, Vladimir Mikulik, A. Louis

Papers citing "Neural networks are a priori biased towards Boolean functions with low entropy"

18 papers shown
A Modern Look at Simplicity Bias in Image Classification Tasks
Xiaoguang Chang, Teng Wang, Changyin Sun
13 Sep 2025
Characterising the Inductive Biases of Neural Networks on Boolean Data
Chris Mingard, Lukas Seier, Niclas Goring, Andrei-Vlad Badelita, Charles London, Ard A. Louis
29 May 2025
Can Large Reasoning Models Self-Train?
Sheikh Shafayat, Fahim Tajwar, Ruslan Salakhutdinov, J. Schneider, Andrea Zanette
27 May 2025
SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning
International Conference on Learning Representations (ICLR), 2024
Hojoon Lee, Dongyoon Hwang, Donghu Kim, Hyunseung Kim, Jun Jet Tai, K. Subramanian, Peter R. Wurman, Jaegul Choo, Peter Stone, Takuma Seno
13 Oct 2024
Neural Redshift: Random Networks are not Random Functions
Damien Teney, A. Nicolicioiu, Valentin Hartmann, Ehsan Abbasnejad
04 Mar 2024
Simplicity bias, algorithmic probability, and the random logistic map
B. Hamzi, K. Dingle
31 Dec 2023
Points of non-linearity of functions generated by random neural networks
David Holmes
19 Apr 2023
Deep neural networks have an inbuilt Occam's razor
Nature Communications (Nat. Commun.), 2023
Chris Mingard, Henry Rees, Guillermo Valle Pérez, A. Louis
13 Apr 2023
The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning
International Conference on Machine Learning (ICML), 2023
Micah Goldblum, Marc Finzi, K. Rowan, A. Wilson
11 Apr 2023
Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
S. Bhattamishra, Arkil Patel, Varun Kanade, Phil Blunsom
22 Nov 2022
A law of adversarial risk, interpolation, and label noise
International Conference on Learning Representations (ICLR), 2022
Daniel Paleka, Amartya Sanyal
08 Jul 2022
Overview frequency principle/spectral bias in deep learning
Communications on Applied Mathematics and Computation (CAMC), 2022
Z. Xu, Yaoyu Zhang
19 Jan 2022
Embedding Principle: a hierarchical structure of loss landscape of deep neural networks
Yaoyu Zhang, Yuqing Li, Zhongwang Zhang, Z. Xu
30 Nov 2021
Embedding Principle of Loss Landscape of Deep Neural Networks
Neural Information Processing Systems (NeurIPS), 2021
Yaoyu Zhang, Zhongwang Zhang, Z. Xu
30 May 2021
Double-descent curves in neural networks: a new perspective using Gaussian processes
AAAI Conference on Artificial Intelligence (AAAI), 2021
Ouns El Harzli, Bernardo Cuenca Grau, Guillermo Valle Pérez, A. Louis
14 Feb 2021
On the exact computation of linear frequency principle dynamics and its generalization
Yaoyu Zhang, Zheng Ma, Z. Xu
15 Oct 2020
Deep frequency principle towards understanding why deeper learning is faster
AAAI Conference on Artificial Intelligence (AAAI), 2020
Zhi-Qin John Xu, Hanxu Zhou
28 Jul 2020
Is SGD a Bayesian sampler? Well, almost
Chris Mingard, Guillermo Valle Pérez, Joar Skalse, A. Louis
26 Jun 2020