Expressivity of Neural Networks with Random Weights and Learned Biases

Landmark universal function approximation results for neural networks with trained weights and biases provided the impetus for the ubiquitous use of neural networks as learning models in neuroscience and Artificial Intelligence (AI). Recent work has extended these results to networks in which only a smaller subset of the weights (e.g., the output weights) is tuned, leaving the remaining parameters random. However, it remains an open question whether universal approximation holds when only biases are learned, despite evidence from neuroscience and AI that biases significantly shape neural responses. The current paper answers this question. We provide theoretical and numerical evidence demonstrating that feedforward neural networks with fixed random weights can approximate any continuous function on compact sets when only their biases are learned. We further show an analogous result for the approximation of dynamical systems by recurrent neural networks. Our findings are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights, and to AI, where they shed light on recent bias- and prefix-based fine-tuning methods for large language models.
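The core claim can be illustrated with a minimal sketch (not the authors' code): a feedforward network whose weights are drawn at random and frozen, with only the bias vectors optimized by gradient descent. The target function, network width, and optimizer settings below are illustrative assumptions, and PyTorch is used purely for convenience.

import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Feedforward network with random, fixed weights and trainable biases.
net = nn.Sequential(
    nn.Linear(1, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
for layer in net:
    if isinstance(layer, nn.Linear):
        layer.weight.requires_grad_(False)  # freeze the random weights
        layer.bias.requires_grad_(True)     # learn the biases only

bias_params = [p for p in net.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(bias_params, lr=1e-2)

# Illustrative target: f(x) = sin(2*pi*x) on the compact set [-1, 1].
x = torch.linspace(-1.0, 1.0, 512).unsqueeze(1)
y = torch.sin(2 * math.pi * x)

for step in range(2000):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    optimizer.step()

print(f"final MSE after bias-only training: {loss.item():.4f}")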
@article{williams2025_2407.00957,
  title   = {Expressivity of Neural Networks with Random Weights and Learned Biases},
  author  = {Ezekiel Williams and Alexandre Payeur and Avery Hee-Woon Ryoo and Thomas Jiralerspong and Matthew G. Perich and Luca Mazzucato and Guillaume Lajoie},
  journal = {arXiv preprint arXiv:2407.00957},
  year    = {2025}
}