Why does deep and cheap learning work so well?
Henry W. Lin, Max Tegmark, David Rolnick
arXiv:1608.08225 · 29 August 2016

Papers citing "Why does deep and cheap learning work so well?"

36 of 236 citing papers shown (page 5 of 5)
Neural Network Renormalization Group
Shuo-Hui Li, Lei Wang
Communities: BDL, DRL
08 Feb 2018

Deep Learning Works in Practice. But Does it Work in Theory?
L. Hoang, R. Guerraoui
Communities: PINN
31 Jan 2018

Scale-invariant Feature Extraction of Neural Network and Renormalization Group Flow
S. Iso, Shotaro Shiba, Sumito Yokoo
Communities: OOD, AI4CE
22 Jan 2018

Information Perspective to Probabilistic Modeling: Boltzmann Machines versus Born Machines
Song Cheng, J. Chen, Lei Wang
12 Dec 2017

Linearly-Recurrent Autoencoder Networks for Learning Dynamics
Samuel E. Otto, C. Rowley
Communities: AI4CE
04 Dec 2017

In folly ripe. In reason rotten. Putting machine theology to rest
M. Nadin
Communities: AI4CE
03 Dec 2017

Provably efficient neural network representation for image classification
Yichen Huang
13 Nov 2017
Compact Neural Networks based on the Multiscale Entanglement Renormalization Ansatz
A. Hallam, Edward Grant, V. Stojevic, Simone Severini, A. Green
09 Nov 2017

What Really is Deep Learning Doing?
Chuyu Xiong
Communities: VLM, OOD
06 Nov 2017

An efficient quantum algorithm for generative machine learning
Xun Gao, Zhengyu Zhang, L. Duan
06 Nov 2017

Approximating Continuous Functions by ReLU Nets of Minimal Width
Boris Hanin, Mark Sellke
31 Oct 2017

How deep learning works -- The geometry of deep learning
Xiao Dong, Jiasong Wu, Ling Zhou
Communities: GNN
30 Oct 2017

Tensor network language model
V. Pestun, Yiannis Vlassopoulos
27 Oct 2017

Stability and Generalization of Learning Algorithms that Converge to Global Optima
International Conference on Machine Learning (ICML), 2017
Zachary B. Charles, Dimitris Papailiopoulos
Communities: MLT
23 Oct 2017
Nonlinear Interference Mitigation via Deep Neural Networks
Christian Hager, H. Pfister
17 Oct 2017

Calligraphic Stylisation Learning with a Physiologically Plausible Model of Movement and Recurrent Neural Networks
Daniel Berio, Memo Akten, F. Leymarie, M. Grierson, R. Plamondon
Communities: GAN
24 Sep 2017

A Computer Composes A Fabled Problem: Four Knights vs. Queen
Azlan Iqbal
04 Sep 2017

Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
A. Cichocki, Anh-Huy Phan, Qibin Zhao, Namgil Lee, Ivan Oseledets, Masashi Sugiyama, Danilo P. Mandic
30 Aug 2017

Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations
Boris Hanin
09 Aug 2017

Language Design as Information Renormalization
Ángel J. Gallego, Roman Orus
04 Aug 2017

On the Importance of Consistency in Training Deep Neural Networks
Chengxi Ye, Yezhou Yang, Cornelia Fermuller, Yiannis Aloimonos
02 Aug 2017
Do Neural Nets Learn Statistical Laws behind Natural Language?
Shuntaro Takahashi, Kumiko Tanaka-Ishii
16 Jul 2017

A Closer Look at Memorization in Deep Networks
Devansh Arpit, Stanislaw Jastrzebski, Nicolas Ballas, David M. Krueger, Emmanuel Bengio, ..., Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Damien Scieur
Communities: TDI
16 Jun 2017

The power of deeper networks for expressing natural functions
David Rolnick, Max Tegmark
16 May 2017

Mutual Information, Neural Networks and the Renormalization Group
M. Koch-Janusz, Zohar Ringel
Communities: DRL, AI4CE
20 Apr 2017

Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design
Yoav Levine, David Yakira, Nadav Cohen, Amnon Shashua
05 Apr 2017

On Generalization and Regularization in Deep Learning
Pirmin Lemberger
Communities: ODL, AI4CE
05 Apr 2017
Design of the Artificial: lessons from the biological roots of general intelligence
Nima Dehghani
Communities: AI4CE
07 Mar 2017

Bayesian Boolean Matrix Factorisation
International Conference on Machine Learning (ICML), 2017
Tammo Rukat, Chris C. Holmes, Michalis K. Titsias, C. Yau
20 Feb 2017

Deep learning and the Schrödinger equation
Kyle Mills, M. Spanner, Isaac Tamblyn
05 Feb 2017

Equivalence of restricted Boltzmann machines and tensor network states
Martín Arjovsky, Song Cheng, Haidong Xie, Léon Bottou, Tao Xiang
17 Jan 2017

The Upper Bound on Knots in Neural Networks
Kevin K. Chen
29 Nov 2016

Local minima in training of neural networks
G. Swirszcz, Wojciech M. Czarnecki, Razvan Pascanu
Communities: ODL
19 Nov 2016

Deep Convolutional Neural Network for Inverse Problems in Imaging
Kyong Hwan Jin, Michael T. McCann, Emmanuel Froustey, M. Unser
11 Nov 2016
Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review
T. Poggio, H. Mhaskar, Lorenzo Rosasco, Alycia Lee, Q. Liao
02 Nov 2016

Exact maximum-entropy estimation with Feynman diagrams
T. Schlank, Ran J. Tessler, Amitai Netser Zernik
01 Dec 2015