ResearchTrend.AI
Why does deep and cheap learning work so well?
arXiv:1608.08225, v4 (latest)
29 August 2016
Henry W. Lin, Max Tegmark, David Rolnick

Papers citing "Why does deep and cheap learning work so well?"

50 of 203 papers shown.
• Learning Fair Representations with Kolmogorov-Arnold Networks
  Amisha Priyadarshini, Sergio Gago Masagué · FaML · 411 / 0 / 0 · 14 Nov 2025
• Comparison of generalised additive models and neural networks in applications: A systematic review
  Jessica Doohan, Lucas Kook, Kevin Burke · 56 / 0 / 0 · 28 Oct 2025
• Symmetry and Generalisation in Neural Approximations of Renormalisation Transformations
  Cassidy Ashworth, Pietro Lio, Francesco Caso · AI4CE · 80 / 0 / 0 · 18 Oct 2025
• Attention to Order: Transformers Discover Phase Transitions via Learnability
  Şener Özönder · 68 / 0 / 0 · 08 Oct 2025
• Multi-Agent Design Assistant for the Simulation of Inertial Fusion Energy
  Meir H. Shachar, D. Sterbentz, Harshitha Menon, C. Jekel, M. Giselle Fernández-Godino, ..., Kevin Korner, Robert Rieben, D. White, William J. Schill, Jonathan L. Belof · AI4CE · 135 / 0 / 0 · 02 Oct 2025
• Latent Twins
  Matthias Chung, Deepanshu Verma, Max Collins, Amit N. Subrahmanya, Varuni Katti Sastry, Vishwas Rao · SyDa, AI4CE · 116 / 0 / 0 · 24 Sep 2025
• Should We Always Train Models on Fine-Grained Classes?
  Davide Pirovano, Federico Milanesio, Michele Caselle, Piero Fariselli, Matteo Osella · 68 / 0 / 0 · 05 Sep 2025
• AI LLM Proof of Self-Consciousness and User-Specific Attractors
  Jeffrey Camlin · 44 / 0 / 0 · 22 Aug 2025
• On the creation of narrow AI: hierarchy and nonlocality of neural network skills
  Eric J. Michaud, Asher Parker-Sartori, Max Tegmark · 366 / 2 / 0 · 21 May 2025
• A Mathematical Philosophy of Explanations in Mechanistic Interpretability -- The Strange Science Part I.i
  Kola Ayonrinde, Louis Jaburi · MILM · 438 / 3 / 0 · 01 May 2025
• The Quantum LLM: Modeling Semantic Spaces with Quantum Principles
  Timo Aukusti Laine · 114 / 1 / 0 · 13 Apr 2025
• KAC: Kolmogorov-Arnold Classifier for Continual Learning (CVPR, 2025)
  Yusong Hu, Zichen Liang, Fei Yang, Qibin Hou, Xialei Liu, Ming-Ming Cheng · CLL · 275 / 5 / 0 · 27 Mar 2025
• Multilevel Generative Samplers for Investigating Critical Phenomena (ICLR, 2025)
  Ankur Singha, E. Cellini, K. Nicoli, K. Jansen, Stefan Kühn, Shinichi Nakajima · 285 / 4 / 0 · 11 Mar 2025
• Aligning Generalisation Between Humans and Machines
  Filip Ilievski, Barbara Hammer, F. V. Harmelen, Benjamin Paassen, S. Saralajew, ..., Vered Shwartz, Gabriella Skitalinskaya, Clemens Stachl, Gido M. van de Ven, T. Villmann · 645 / 2 / 0 · 23 Nov 2024
• Dynamic neuron approach to deep neural networks: Decoupling neurons for renormalization group analysis
  Donghee Lee, Hye-Sung Lee, Jaeok Yi · 353 / 3 / 0 · 01 Oct 2024
• MS$^3$D: A RG Flow-Based Regularization for GAN Training with Limited Data (ICML, 2024)
  Jian Wang, Xin Lan, Yuxin Tian, Jiancheng Lv · AI4CE · 152 / 2 / 0 · 20 Aug 2024
• KAN: Kolmogorov-Arnold Networks
  Ziming Liu, Yixuan Wang, Sachin Vaidya, Fabian Ruehle, James Halverson, Marin Soljacic, Thomas Y. Hou, Max Tegmark · 868 / 1,133 / 0 · 30 Apr 2024
• Extracting Formulae in Many-Valued Logic from Deep Neural Networks (IEEE Transactions on Signal Processing, 2024)
  Yani Zhang, Helmut Bölcskei · 124 / 0 / 0 · 22 Jan 2024
• Deep Neural Networks for Automatic Speaker Recognition Do Not Learn Supra-Segmental Temporal Features (Pattern Recognition Letters, 2023)
  Daniel Neururer, Volker Dellwo, Thilo Stadelmann · 214 / 3 / 0 · 01 Nov 2023
• The Evolution of the Interplay Between Input Distributions and Linear Regions in Networks
  Xuan Qi, Yi Wei · 161 / 0 / 0 · 28 Oct 2023
• A Hyperparameter Study for Quantum Kernel Methods (Quantum Machine Intelligence, 2023)
  Sebastian Egginger, Alona Sakhnenko, J. M. Lorenz · 202 / 13 / 0 · 18 Oct 2023
• Fundamental Limits of Deep Learning-Based Binary Classifiers Trained with Hinge Loss
  T. Getu, Georges Kaddoum, M. Bennis · 254 / 1 / 0 · 13 Sep 2023
• Scale-Preserving Automatic Concept Extraction (SPACE) (Machine-mediated learning, 2023)
  Andres Felipe Posada-Moreno, Lukas Kreisköther, T. Glander, Sebastian Trimpe · 93 / 1 / 0 · 11 Aug 2023
• Iterative Magnitude Pruning as a Renormalisation Group: A Study in The Context of The Lottery Ticket Hypothesis
  Abu-Al Hassan · 128 / 0 / 0 · 06 Aug 2023
• Deep Convolutional Neural Networks with Zero-Padding: Feature Extraction and Learning
  Zhixiong Han, Baichen Liu, Shao-Bo Lin, Ding-Xuan Zhou · 133 / 6 / 0 · 30 Jul 2023
• Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning
  V. Usatyuk, Sergey Egorov, Denis Sapozhnikov · 277 / 3 / 0 · 28 Jul 2023
• Deep neural networks have an inbuilt Occam's razor (Nature Communications, 2023)
  Chris Mingard, Henry Rees, Guillermo Valle Pérez, A. Louis · UQCV, BDL · 240 / 28 / 0 · 13 Apr 2023
• From Wide to Deep: Dimension Lifting Network for Parameter-efficient Knowledge Graph Embedding (IEEE Transactions on Knowledge and Data Engineering, 2023)
  Borui Cai, Yong Xiang, Longxiang Gao, Di Wu, Heng Zhang, Jiongdao Jin, Tom H. Luan · 206 / 3 / 0 · 22 Mar 2023
• Expressivity of Shallow and Deep Neural Networks for Polynomial Approximation
  Itai Shapira · 92 / 0 / 0 · 06 Mar 2023
• MOSAIC, a comparison framework for machine learning models
  Mattéo Papin, Yann Beaujeault-Taudiere, F. Magniette · VLM · 63 / 0 / 0 · 30 Jan 2023
• A prediction and behavioural analysis of machine learning methods for modelling travel mode choice (Transportation Research Part C, 2023)
  José Ángel Martín-Baos, Julio Alberto López-Gómez, Luis Rodriguez-Benitez, T. Hillel, Ricardo García-Ródenas · 180 / 30 / 0 · 11 Jan 2023
• Inference on Time Series Nonparametric Conditional Moment Restrictions Using General Sieves
  Xiaohong Chen, Yuan Liao, Weichen Wang · 142 / 0 / 0 · 31 Dec 2022
• Renormalization in the neural network-quantum field theory correspondence
  Harold Erbin, Vincent Lahoche, D. O. Samary · 217 / 8 / 0 · 22 Dec 2022
• Changes from Classical Statistics to Modern Statistics and Data Science
  Kai Zhang, Shan-Yu Liu, M. Xiong · 231 / 1 / 0 · 30 Oct 2022
• Hierarchical quantum circuit representations for neural architecture search (npj Quantum Information, 2022)
  Matt Lourens, I. Sinayskiy, D. Park, Carsten Blank, Francesco Petruccione · 252 / 15 / 0 · 26 Oct 2022
• Deep Neural Networks as the Semi-classical Limit of Topological Quantum Neural Networks: The problem of generalisation
  A. Marcianò, De-Wei Chen, Filippo Fabrocini, C. Fields, M. Lulli, Emanuele Zappala · GNN · 102 / 5 / 0 · 25 Oct 2022
• Precision Machine Learning
  Eric J. Michaud, Ziming Liu, Max Tegmark · 133 / 40 / 0 · 24 Oct 2022
• When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work (NeurIPS, 2022)
  Jiawei Zhang, Yushun Zhang, Mingyi Hong, Tian Ding, Jianfeng Yao · 271 / 10 / 0 · 21 Oct 2022
• Why neural networks find simple solutions: the many regularizers of geometric complexity (NeurIPS, 2022)
  Benoit Dherin, Michael Munn, M. Rosca, David Barrett · 276 / 41 / 0 · 27 Sep 2022
• Three Learning Stages and Accuracy-Efficiency Tradeoff of Restricted Boltzmann Machines (Nature Communications, 2022)
  Lennart Dabelow, Masahito Ueda · 182 / 11 / 0 · 02 Sep 2022
• Gaussian Process Surrogate Models for Neural Networks (UAI, 2022)
  Michael Y. Li, Erin Grant, Thomas Griffiths · BDL, SyDa · 224 / 9 / 0 · 11 Aug 2022
• Image sensing with multilayer, nonlinear optical neural networks (Nature Photonics, 2022)
  Tianyu Wang, Mandar M. Sohoni, Logan G. Wright, Martin M. Stein, Shifan Ma, Tatsuhiro Onodera, Maxwell G. Anderson, Peter L. McMahon · 127 / 212 / 0 · 27 Jul 2022
• Wavelet Conditional Renormalization Group (Physical Review X, 2022)
  Tanguy Marchand, M. Ozawa, Giulio Biroli, S. Mallat · 106 / 21 / 0 · 11 Jul 2022
• Advanced Transient Diagnostic with Ensemble Digital Twin Modeling
  Edward Chen, Linyu Lin, Truc-Nam Dinh · 54 / 4 / 0 · 23 May 2022
• Towards understanding deep learning with the natural clustering prior
  Simon Carbonnelle · 144 / 0 / 0 · 15 Mar 2022
• Categorical Representation Learning and RG flow operators for algorithmic classifiers
  A. Sheshmani, Yi-Zhuang You, Wenbo Fu, A. Azizi · AI4CE · 57 / 4 / 0 · 15 Mar 2022
• Identifying equivalent Calabi–Yau topologies: A discrete challenge from math and physics for machine learning
  Vishnu Jejjala, W. Taylor, Andrew P. Turner · 241 / 8 / 0 · 15 Feb 2022
• Complexity from Adaptive-Symmetries Breaking: Global Minima in the Statistical Mechanics of Deep Neural Networks
  Shaun Li · AI4CE · 186 / 1 / 0 · 03 Jan 2022
• Explicitly antisymmetrized neural network layers for variational Monte Carlo simulation
  Jeffmin Lin, Gil Goldshlager, Lin Lin · 181 / 29 / 0 · 07 Dec 2021
• Error Bounds for a Matrix-Vector Product Approximation with Deep ReLU Neural Networks
  T. Getu · 169 / 2 / 0 · 25 Nov 2021