How degenerate is the parametrization of neural networks with the ReLU activation function? Neural Information Processing Systems (NeurIPS), 2019.
On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces. Neural Networks (NN), 2019.
Nonlinear Approximation and (Deep) ReLU Networks. Constructive Approximation (Constr. Approx.), 2019.
Approximation spaces of deep neural networks. Constructive Approximation (Constr. Approx.), 2019.
Theoretical guarantees for sampling and inference in generative models with latent diffusions. Annual Conference on Computational Learning Theory (COLT), 2019.
Representation Learning with Weighted Inner Product for Universal Approximation of General Similarities. International Joint Conference on Artificial Intelligence (IJCAI), 2019.
Nonlinear Approximation via Compositions. Neural Networks (NN), 2019.
How Well Generative Adversarial Networks Learn Distributions. Journal of Machine Learning Research (JMLR), 2018.