On the Computational Efficiency of Training Neural Networks
Roi Livni, Shai Shalev-Shwartz, Ohad Shamir
5 October 2014 · arXiv:1410.1141

Papers citing "On the Computational Efficiency of Training Neural Networks"

11 papers shown
Training Large Neural Networks With Low-Dimensional Error Feedback
Maher Hanut, Jonathan Kadmon
27 Feb 2025

Early Directional Convergence in Deep Homogeneous Neural Networks for Small Initializations
Akshay Kumar, Jarvis Haupt
12 Mar 2024 · ODL

Loss Landscape of Shallow ReLU-like Neural Networks: Stationary Points, Saddle Escape, and Network Embedding
Zhengqing Wu, Berfin Simsek, Francois Ged
08 Feb 2024 · ODL

Fundamental Limits of Deep Learning-Based Binary Classifiers Trained with Hinge Loss
T. Getu, Georges Kaddoum, M. Bennis
13 Sep 2023

Nonparametric Learning of Two-Layer ReLU Residual Units
Zhunxuan Wang, Linyun He, Chunchuan Lyu, Shay B. Cohen
17 Aug 2020 · MLT, OffRL

Large-time asymptotics in deep learning
Carlos Esteve, Borjan Geshkovski, Dario Pighin, Enrique Zuazua
06 Aug 2020

Deep Semi-Random Features for Nonlinear Function Approximation
Kenji Kawaguchi, Bo Xie, Vikas Verma, Le Song
28 Feb 2017

Exponentially vanishing sub-optimal local minima in multilayer neural networks
Daniel Soudry, Elad Hoffer
19 Feb 2017

From average case complexity to improper learning complexity
Amit Daniely, N. Linial, Shai Shalev-Shwartz
10 Nov 2013

Building high-level features using large scale unsupervised learning
Quoc V. Le, Marc'Aurelio Ranzato, R. Monga, M. Devin, Kai Chen, G. Corrado, J. Dean, A. Ng
29 Dec 2011 · SSL, OffRL, CVBM

Large-Scale Convex Minimization with a Low-Rank Constraint
Shai Shalev-Shwartz, Alon Gonen, Ohad Shamir
08 Jun 2011