
| Title | Venue | Year |
|---|---|---|
| How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks | International Conference on Learning Representations (ICLR) | 2020 |
| Theoretical Analysis of the Advantage of Deepening Neural Networks | International Conference on Machine Learning and Applications (ICMLA) | 2020 |
| How benign is benign overfitting? | International Conference on Learning Representations (ICLR) | 2020 |
| On the Number of Linear Regions of Convolutional Neural Networks | International Conference on Machine Learning (ICML) | 2020 |
| Provably Good Solutions to the Knapsack Problem via Neural Networks of Bounded Size | AAAI Conference on Artificial Intelligence (AAAI) | 2020 |
| Piecewise linear activations substantially shape the loss surfaces of neural networks | International Conference on Learning Representations (ICLR) | 2020 |
| Uncertainty Quantification for Sparse Deep Learning | International Conference on Artificial Intelligence and Statistics (AISTATS) | 2020 |
| Investigating the Compositional Structure of Deep Neural Networks | International Conference on Machine Learning, Optimization, and Data Science (MOD) | 2020 |
| Self-explaining AI as an alternative to interpretable AI | Artificial General Intelligence (AGI) | 2020 |
| Empirical Studies on the Properties of Linear Regions in Deep Neural Networks | International Conference on Learning Representations (ICLR) | 2020 |
| Lossless Compression of Deep Neural Networks | Integration of AI and OR Techniques in Constraint Programming (CPAIOR) | 2020 |
| Trajectory growth lower bounds for random sparse deep ReLU networks | International Conference on Machine Learning and Applications (ICMLA) | 2019 |
| The Local Elasticity of Neural Networks (Hangfeng He, Weijie J. Su) | International Conference on Learning Representations (ICLR) | 2019 |
| Reverse-Engineering Deep ReLU Networks | International Conference on Machine Learning (ICML) | 2019 |
| Computing Linear Restrictions of Neural Networks | Neural Information Processing Systems (NeurIPS) | 2019 |
| Gradient Dynamics of Shallow Univariate ReLU Networks | Neural Information Processing Systems (NeurIPS) | 2019 |
| Deep ReLU Networks Have Surprisingly Few Activation Patterns | Neural Information Processing Systems (NeurIPS) | 2019 |
| Expression of Fractals Through Neural Network Functions | IEEE Journal on Selected Areas in Information Theory (JSAIT) | 2019 |
| The Geometry of Deep Networks: Power Diagram Subdivision | Neural Information Processing Systems (NeurIPS) | 2019 |