ULU: A Unified Activation Function

Comments: 12 pages main text, 3 pages bibliography, 7 figures, 4 tables
Abstract
We propose \textbf{ULU}, a novel non-monotonic, piecewise activation function that treats positive and negative inputs differently. Extensive experiments demonstrate that ULU significantly outperforms ReLU and Mish on image classification and object detection tasks. Its variant, Adaptive ULU (\textbf{AULU}), introduces two learnable parameters that let the activation adapt its response separately for positive and negative inputs. Additionally, we introduce the LIB (Like Inductive Bias) metric, derived from AULU, to quantitatively measure a model's inductive bias.
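To make the described structure concrete, the sketch below shows a piecewise activation module with separate learnable parameters for the negative and positive branches, in the spirit of AULU. The exact ULU/AULU formulas are not reproduced in this excerpt, so the branch functions, the parameter names alpha and beta, and the class name AdaptivePiecewiseActivation are placeholder assumptions, not the paper's definition.

\begin{verbatim}
# Hypothetical sketch: a piecewise activation with per-branch learnable
# parameters, illustrating the structure the abstract describes. The branch
# forms below are placeholders, NOT the ULU/AULU formulas from the paper.
import torch
import torch.nn as nn

class AdaptivePiecewiseActivation(nn.Module):
    def __init__(self, alpha_init: float = 1.0, beta_init: float = 1.0):
        super().__init__()
        # Learnable parameters: alpha governs the negative branch,
        # beta governs the positive branch (names assumed here).
        self.alpha = nn.Parameter(torch.tensor(alpha_init))
        self.beta = nn.Parameter(torch.tensor(beta_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Placeholder smooth branches; each side is scaled by its own parameter.
        pos = self.beta * x * torch.sigmoid(x)
        neg = self.alpha * x * torch.sigmoid(x)
        return torch.where(x >= 0, pos, neg)

# Usage: drop-in replacement for nn.ReLU in a model definition.
# act = AdaptivePiecewiseActivation()
# y = act(torch.randn(4, 8))
\end{verbatim}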
