arXiv:1511.05653
Why are deep nets reversible: A simple theory, with implications for training
Sanjeev Arora, Yingyu Liang, Tengyu Ma
18 November 2015
Papers citing "Why are deep nets reversible: A simple theory, with implications for training" (9 of 9 shown):
- Robust One-Bit Recovery via ReLU Generative Networks: Near-Optimal Statistical Rate and Global Landscape Analysis. Shuang Qiu, Xiaohan Wei, Zhuoran Yang. 14 Aug 2019.
- Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning. Charles H. Martin, Michael W. Mahoney. 02 Oct 2018.
- Rate-Optimal Denoising with Deep Neural Networks. Reinhard Heckel, Wen Huang, Paul Hand, V. Voroninski. 22 May 2018.
- Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent Net. Anirudh Goyal, Nan Rosemary Ke, Surya Ganguli, Yoshua Bengio. 07 Nov 2017.
- Reversible Architectures for Arbitrarily Deep Residual Neural Networks. B. Chang, Lili Meng, E. Haber, Lars Ruthotto, David Begert, E. Holtham. 12 Sep 2017.
- Towards Understanding the Invertibility of Convolutional Neural Networks. A. Gilbert, Yi Zhang, Kibok Lee, Y. Zhang, Honglak Lee. 24 May 2017.
- AdaNet: Adaptive Structural Learning of Artificial Neural Networks. Corinna Cortes, X. Gonzalvo, Vitaly Kuznetsov, M. Mohri, Scott Yang. 05 Jul 2016.
- Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation. B. Scellier, Yoshua Bengio. 16 Feb 2016.
- On the interplay of network structure and gradient convergence in deep learning. V. Ithapu, Sathya Ravi, Vikas Singh. 17 Nov 2015.