Frequency and Generalisation of Periodic Activation Functions in Reinforcement Learning

Periodic activation functions, often referred to as learned Fourier features, have been widely demonstrated to improve sample efficiency and stability in a variety of deep RL algorithms. Potentially incompatible hypotheses have been proposed about the source of these improvements. One is that periodic activations learn low-frequency representations and, as a result, avoid overfitting to bootstrapped targets. Another is that periodic activations learn high-frequency representations that are more expressive, allowing networks to quickly fit complex value functions. We analyse these claims empirically, finding that periodic representations consistently converge to high frequencies regardless of their initialisation frequency. We also find that while periodic activation functions improve sample efficiency, they exhibit worse generalisation on states with added observation noise, especially when compared to otherwise equivalent networks with ReLU activation functions. Finally, we show that weight decay regularisation can partially offset the overfitting of periodic activation functions, delivering value functions that learn quickly while also generalising.
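To make the object of study concrete, below is a minimal sketch of a periodic-activation (learned Fourier feature) first layer and a weight-decayed value network, assuming a PyTorch-style implementation. The class name, the initial frequency scale `sigma`, the layer sizes, and the use of AdamW are illustrative assumptions, not the authors' exact architecture or training setup.

```python
# Minimal sketch of a learned Fourier feature layer: sin(Wx + b) with W, b learned.
# Names, sizes, and sigma are hypothetical; this is not the paper's reference code.
import torch
import torch.nn as nn


class LearnedFourierFeatures(nn.Module):
    """First layer mapping observations to sin(Wx + b)."""

    def __init__(self, in_dim: int, n_features: int, sigma: float = 1.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, n_features)
        # sigma sets the *initial* frequency of the representation; the abstract's
        # finding is that learned frequencies drift high during training regardless
        # of this initialisation.
        with torch.no_grad():
            self.linear.weight.mul_(sigma)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.linear(x))


# Illustrative value network: periodic first layer followed by a small ReLU head.
value_net = nn.Sequential(
    LearnedFourierFeatures(in_dim=8, n_features=256, sigma=1.0),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, 1),
)

# Weight decay (here via AdamW) is the regulariser the abstract says can partially
# offset the overfitting of periodic activations; the coefficient is a placeholder.
optimizer = torch.optim.AdamW(value_net.parameters(), lr=3e-4, weight_decay=1e-2)
```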
@article{mavor-parker2025_2407.06756,
  title   = {Frequency and Generalisation of Periodic Activation Functions in Reinforcement Learning},
  author  = {Augustine N. Mavor-Parker and Matthew J. Sargent and Caswell Barry and Lewis Griffin and Clare Lyle},
  journal = {arXiv preprint arXiv:2407.06756},
  year    = {2025}
}