Characterising the Inductive Biases of Neural Networks on Boolean Data

Main: 8 pages · Bibliography: 6 pages · Appendix: 22 pages · 19 figures · 2 tables
Abstract

Deep neural networks are renowned for their ability to generalise well across diverse tasks, even when heavily overparameterised. Existing work offers only partial explanations (for example, the NTK-based task-model alignment explanation neglects feature learning). Here, we provide an end-to-end, analytically tractable case study that links a network's inductive bias, its training dynamics including feature learning, and its eventual generalisation. Specifically, we exploit the one-to-one correspondence between depth-2 discrete fully connected networks and disjunctive normal form (DNF) formulas by training on Boolean functions. Under a Monte Carlo learning algorithm, our model exhibits predictable training dynamics and the emergence of interpretable features. This framework allows us to trace, in detail, how inductive bias and feature formation drive generalisation.
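To make the network–formula correspondence concrete, here is a minimal sketch (not the paper's construction; the weight/bias encoding is a standard illustration) of how a depth-2 threshold network computes a DNF formula: each hidden unit realises one conjunctive term as a threshold gate, and the output unit ORs the hidden activations.

```python
import itertools

def dnf_eval(terms, x):
    """Evaluate a DNF formula directly.
    terms: list of terms; each term is a list of (index, sign) literals,
    where sign=True means x[i] and sign=False means NOT x[i]."""
    return any(all(x[i] == 1 if s else x[i] == 0 for i, s in t) for t in terms)

def depth2_network(terms, x):
    """Evaluate the same formula as a depth-2 threshold network."""
    hidden = []
    for t in terms:
        # Pre-activation counts satisfied literals: weight +1 for a positive
        # literal, and (1 - x[i]) for a negated one.
        pre = sum(x[i] if s else 1 - x[i] for i, s in t)
        # AND gate: fires only if every literal in the term is satisfied.
        hidden.append(1 if pre >= len(t) - 0.5 else 0)
    # Output unit is an OR gate over the hidden units.
    return 1 if sum(hidden) >= 0.5 else 0

# Check equivalence on all 2^3 inputs for (x0 AND NOT x1) OR x2.
terms = [[(0, True), (1, False)], [(2, True)]]
for x in itertools.product([0, 1], repeat=3):
    assert dnf_eval(terms, x) == bool(depth2_network(terms, x))
```

The mapping also runs in reverse: any depth-2 network with Boolean-valued hidden units can be read off as a formula over its inputs, which is the sense in which such networks have interpretable features.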

@article{mingard2025_2505.24060,
  title={Characterising the Inductive Biases of Neural Networks on Boolean Data},
  author={Chris Mingard and Lukas Seier and Niclas Göring and Andrei-Vlad Badelita and Charles London and Ard Louis},
  journal={arXiv preprint arXiv:2505.24060},
  year={2025}
}