Why does deep and cheap learning work so well? (arXiv:1608.08225)
Henry W. Lin, Max Tegmark, David Rolnick
29 August 2016

Papers citing "Why does deep and cheap learning work so well?" (50 of 203 shown)
Como funciona o Deep Learning [How Deep Learning works]
  M. Ponti, G. B. P. D. Costa · 20 Jun 2018
Continuous-variable quantum neural networks
  N. Killoran, T. Bromley, J. M. Arrazola, Maria Schuld, N. Quesada, S. Lloyd · GNN · 18 Jun 2018
Learning Dynamics of Linear Denoising Autoencoders
  Arnu Pretorius, Steve Kroon, Herman Kamper · AI4CE · 14 Jun 2018
Interpreting Deep Learning: The Machine Learning Rorschach Test?
  Adam S. Charles · AAML, HAI, AI4CE · 01 Jun 2018
Mean Field Theory of Activation Functions in Deep Neural Networks
  M. Milletarí, Thiparat Chotibut, P. E. Trevisanutto · 22 May 2018
Deep learning generalizes because the parameter-function map is biased towards simple functions
  Guillermo Valle Pérez, Chico Q. Camargo, A. Louis · MLT, AI4CE · 22 May 2018
Opening the black box of deep learning
  Dian Lei, Xiaoxiao Chen, Jianfei Zhao · AI4CE, PINN · 22 May 2018
Reconciled Polynomial Machine: A Unified Representation of Shallow and Deep Learning Models
  Jiawei Zhang, Limeng Cui, Fisher B. Gouza · FAtt · 19 May 2018
On Deep Ensemble Learning from a Function Approximation Perspective
  Jiawei Zhang, Limeng Cui, Fisher B. Gouza · FedML · 19 May 2018
Doing the impossible: Why neural networks can be trained at all
  Nathan Oken Hodas, P. Stinis · AI4CE · 13 May 2018
Distribution-Aware Binarization of Neural Networks for Sketch Recognition
  Christian Schroeder de Witt, Vishal Batchu, Sri Aurobindo Munagala, Rohit Gajawada, A. Namboodiri · MQ · 09 Apr 2018
Deep Learning of the Nonlinear Schrödinger Equation in Fiber-Optic Communications
  Christian Hager, H. Pfister · 09 Apr 2018
The Loss Surface of XOR Artificial Neural Networks
  D. Mehta, Xiaojun Zhao, Edgar A. Bernal, D. Wales · 06 Apr 2018
Understanding Autoencoders with Information Theoretic Concepts
  Shujian Yu, José C. Príncipe · AI4CE · 30 Mar 2018
A high-bias, low-variance introduction to Machine Learning for physicists
  Pankaj Mehta, Marin Bukov, Ching-Hao Wang, A. G. Day, C. Richardson, Charles K. Fisher, D. Schwab · AI4CE · 23 Mar 2018
Enforcing constraints for interpolation and extrapolation in Generative Adversarial Networks
  P. Stinis, Tobias J. Hagge, A. Tartakovsky, Enoch Yeung · GAN, AI4CE · 22 Mar 2018
Assessing Shape Bias Property of Convolutional Neural Networks
  Hossein Hosseini, Baicen Xiao, Mayoore S. Jaiswal, Radha Poovendran · 21 Mar 2018
Generalization and Expressivity for Deep Nets (IEEE Transactions on Neural Networks and Learning Systems, 2018)
  Shao-Bo Lin · 10 Mar 2018
A computational perspective of the role of Thalamus in cognition (Neural Computation, 2018)
  Nima Dehghani, R. D. Wimmer · LRM · 02 Mar 2018
Neural Network Renormalization Group
  Shuo-Hui Li, Lei Wang · BDL, DRL · 08 Feb 2018
Deep Learning Works in Practice. But Does it Work in Theory?
  L. Hoang, R. Guerraoui · PINN · 31 Jan 2018
Scale-invariant Feature Extraction of Neural Network and Renormalization Group Flow
  S. Iso, Shotaro Shiba, Sumito Yokoo · OOD, AI4CE · 22 Jan 2018
Information Perspective to Probabilistic Modeling: Boltzmann Machines versus Born Machines
  Song Cheng, J. Chen, Lei Wang · 12 Dec 2017
Linearly-Recurrent Autoencoder Networks for Learning Dynamics
  Samuel E. Otto, C. Rowley · AI4CE · 04 Dec 2017
Provably efficient neural network representation for image classification
  Yichen Huang · 13 Nov 2017
Compact Neural Networks based on the Multiscale Entanglement Renormalization Ansatz
  A. Hallam, Edward Grant, V. Stojevic, Simone Severini, A. Green · 09 Nov 2017
What Really is Deep Learning Doing?
  Chuyu Xiong · VLM, OOD · 06 Nov 2017
An efficient quantum algorithm for generative machine learning
  Xun Gao, Zhengyu Zhang, L. Duan · 06 Nov 2017
Approximating Continuous Functions by ReLU Nets of Minimal Width
  Boris Hanin, Mark Sellke · 31 Oct 2017
How deep learning works -- The geometry of deep learning
  Xiao Dong, Jiasong Wu, Ling Zhou · GNN · 30 Oct 2017
Tensor network language model
  V. Pestun, Yiannis Vlassopoulos · 27 Oct 2017
Stability and Generalization of Learning Algorithms that Converge to Global Optima (International Conference on Machine Learning, 2017)
  Zachary B. Charles, Dimitris Papailiopoulos · MLT · 23 Oct 2017
Nonlinear Interference Mitigation via Deep Neural Networks
  Christian Hager, H. Pfister · 17 Oct 2017
Calligraphic Stylisation Learning with a Physiologically Plausible Model of Movement and Recurrent Neural Networks
  Daniel Berio, Memo Akten, F. Leymarie, M. Grierson, R. Plamondon · GAN · 24 Sep 2017
A Computer Composes A Fabled Problem: Four Knights vs. Queen
  Azlan Iqbal · 04 Sep 2017
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
  A. Cichocki, Anh-Huy Phan, Qibin Zhao, Namgil Lee, Ivan Oseledets, Masashi Sugiyama, Danilo P. Mandic · 30 Aug 2017
Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations
  Boris Hanin · 09 Aug 2017
Language Design as Information Renormalization
  Ángel J. Gallego, Roman Orus · 04 Aug 2017
On the Importance of Consistency in Training Deep Neural Networks
  Chengxi Ye, Yezhou Yang, Cornelia Fermuller, Yiannis Aloimonos · 02 Aug 2017
Do Neural Nets Learn Statistical Laws behind Natural Language?
  Shuntaro Takahashi, Kumiko Tanaka-Ishii · 16 Jul 2017
A Closer Look at Memorization in Deep Networks
  Devansh Arpit, Stanislaw Jastrzebski, Nicolas Ballas, David M. Krueger, Emmanuel Bengio, ..., Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Damien Scieur · TDI · 16 Jun 2017
The power of deeper networks for expressing natural functions
  David Rolnick, Max Tegmark · 16 May 2017
Mutual Information, Neural Networks and the Renormalization Group
  M. Koch-Janusz, Zohar Ringel · DRL, AI4CE · 20 Apr 2017
Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design
  Yoav Levine, David Yakira, Nadav Cohen, Amnon Shashua · 05 Apr 2017
Design of the Artificial: lessons from the biological roots of general intelligence
  Nima Dehghani · AI4CE · 07 Mar 2017
Bayesian Boolean Matrix Factorisation (International Conference on Machine Learning, 2017)
  Tammo Rukat, Chris C. Holmes, Michalis K. Titsias, C. Yau · 20 Feb 2017
Deep learning and the Schrödinger equation
  Kyle Mills, M. Spanner, Isaac Tamblyn · 05 Feb 2017
Equivalence of restricted Boltzmann machines and tensor network states
  Martín Arjovsky, Song Cheng, Haidong Xie, Léon Bottou, Tao Xiang · 17 Jan 2017
The Upper Bound on Knots in Neural Networks
  Kevin K. Chen · 29 Nov 2016
Local minima in training of neural networks
  G. Swirszcz, Wojciech M. Czarnecki, Razvan Pascanu · ODL · 19 Nov 2016