
We show that deep sparse ReLU networks with ternary weights and deep ReLU networks with binary weights can approximate $\beta$-H\"older functions on $[0,1]^d$. Also, continuous functions on $[0,1]^d$ can be approximated by networks of fixed depth with a binary activation function.
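For context, the $\beta$-H\"older class referred to above is usually defined as follows; the abstract does not fix its exact convention, so the norm below is only a standard, indicative choice. For $\beta \le 1$, a function $f$ is $\beta$-H\"older if
\[
\|f\|_{\mathcal{C}^{\beta}} \;=\; \sup_{x} |f(x)| \;+\; \sup_{x \neq y} \frac{|f(x)-f(y)|}{\|x-y\|_{\infty}^{\beta}} \;<\; \infty ,
\]
and for $\beta > 1$ one additionally requires all partial derivatives of order up to $\lfloor \beta \rfloor$ to exist, with the order-$\lfloor \beta \rfloor$ derivatives being $(\beta - \lfloor \beta \rfloor)$-H\"older in the same sense.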