Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class

In this paper, we construct neural networks with ReLU, sine and $2^x$ as activation functions. For a general continuous function $f$ defined on $[0,1]^d$ with continuity modulus $\omega_f(\cdot)$, we construct ReLU-sine-$2^x$ networks that enjoy an approximation rate $\mathcal{O}\!\left(\omega_f(\sqrt{d})\cdot 2^{-M}+\omega_f\!\left(\frac{\sqrt{d}}{N}\right)\right)$, where $M,N\in\mathbb{N}^{+}$ denote the hyperparameters related to the widths of the networks. As a consequence, we can construct a ReLU-sine-$2^x$ network with depth $5$ and width $\max\left\{\left\lceil 2d^{3/2}\left(\frac{3\mu}{\epsilon}\right)^{1/\alpha}\right\rceil,\, 2\left\lceil\log_2\frac{3\mu d^{\alpha/2}}{2\epsilon}\right\rceil+2\right\}$ that approximates $f\in\mathcal{H}_{\mu}^{\alpha}([0,1]^d)$ within a given tolerance $\epsilon$ measured in the $L^p$ norm, $p\in[1,\infty)$, where $\mathcal{H}_{\mu}^{\alpha}([0,1]^d)$ denotes the Hölder continuous function class defined on $[0,1]^d$ with order $\alpha\in(0,1]$ and constant $\mu>0$. Therefore, the ReLU-sine-$2^x$ networks overcome the curse of dimensionality on $\mathcal{H}_{\mu}^{\alpha}([0,1]^d)$. In addition to their super expressive power, functions implemented by ReLU-sine-$2^x$ networks are (generalized) differentiable, enabling us to apply SGD to train them.
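To make the activation combination concrete, here is a minimal sketch of a small feed-forward network whose hidden units use ReLU, sine, and $2^x$ activations, trained with plain SGD. This is not the authors' construction: the way the activations are split across a layer, the widths, the toy Hölder-continuous target, and all training hyperparameters are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's construction): a network
# mixing ReLU, sine, and 2^x activations, trainable by SGD since all three
# activations are (generalized) differentiable.
import torch
import torch.nn as nn

class ReLUSineExpBlock(nn.Module):
    """Linear layer whose outputs are split evenly across the three activations."""
    def __init__(self, in_dim, width):
        super().__init__()
        assert width % 3 == 0, "width is split evenly across the three activations"
        self.linear = nn.Linear(in_dim, width)

    def forward(self, x):
        z = self.linear(x)
        a, b, c = z.chunk(3, dim=-1)
        return torch.cat([torch.relu(a), torch.sin(b), torch.exp2(c)], dim=-1)

d, width = 4, 12                      # input dimension and hidden width (assumed)
net = nn.Sequential(ReLUSineExpBlock(d, width),
                    ReLUSineExpBlock(width, width),
                    nn.Linear(width, 1))

# Toy SGD fit of a Hölder-continuous target on [0,1]^d, e.g. f(x) = ||x||^{1/2}
opt = torch.optim.SGD(net.parameters(), lr=1e-2)
for _ in range(200):
    x = torch.rand(64, d)             # samples from [0,1]^d
    y = x.norm(dim=-1, keepdim=True).sqrt()
    loss = ((net(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```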