Expressive power of binary and ternary neural networks

Abstract

We show that deep sparse ReLU networks with ternary weights and deep ReLU networks with binary weights can approximate $\beta$-Hölder functions on $[0,1]^d$. Also, Hölder continuous functions on $[0,1]^d$ can be approximated by networks of depth $2$ with the binary activation function $\mathds{1}_{[0,1)}$.
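To give a loose numerical illustration of the second statement (this is a minimal sketch of a depth-2 network with the binary activation $\mathds{1}_{[0,1)}$, not the construction used in the paper; the function `depth2_binary_net`, the grid size `N`, and the example Hölder function are hypothetical choices for illustration):

```python
import numpy as np

def binary_act(z):
    # binary activation 1_{[0,1)}: outputs 1 on [0, 1), 0 elsewhere
    return ((z >= 0) & (z < 1)).astype(float)

def depth2_binary_net(x, N, f):
    # hidden layer: unit k computes 1_{[0,1)}(N*x - k),
    # i.e. the indicator of the interval [k/N, (k+1)/N)
    k = np.arange(N)
    hidden = binary_act(N * x[:, None] - k[None, :])   # shape (len(x), N)
    # output layer: weights f(k/N) yield a piecewise-constant approximation of f
    out_w = f(k / N)
    return hidden @ out_w

# example: a 1/2-Hölder (but not Lipschitz) function on [0, 1]
f = lambda t: np.sqrt(np.abs(t - 0.3))
x = np.linspace(0, 1, 1000, endpoint=False)
for N in (10, 100, 1000):
    err = np.max(np.abs(depth2_binary_net(x, N, f) - f(x)))
    print(f"N = {N:5d}  sup-error ~ {err:.4f}")   # decays roughly like N**(-1/2)
```

In this toy version the sup-norm error is controlled by the Hölder modulus of $f$ over intervals of length $1/N$, which is the kind of rate one expects for piecewise-constant approximation of $\beta$-Hölder functions.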
