Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition

6 March 2015
Rong Ge, Furong Huang, Chi Jin, Yang Yuan
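The page itself does not describe the paper's method, but since the title names the core idea (adding noise to stochastic gradient steps so that iterates escape strict saddle points), here is a minimal, hedged Python sketch of that general idea on a toy function. The objective, step size, and noise scale are illustrative assumptions, not taken from the paper.

# Toy illustration (not the paper's algorithm): noisy gradient descent
# escaping the strict saddle of f(x, y) = 0.25*x**4 - 0.5*x**2 + 0.5*y**2,
# which has a saddle point at (0, 0) and minima at (+1, 0) and (-1, 0).
import numpy as np

def grad(w):
    x, y = w
    return np.array([x**3 - x, y])

rng = np.random.default_rng(0)
w = np.zeros(2)          # start exactly at the saddle point
eta = 0.1                # step size (assumed for illustration)
for _ in range(2000):
    noise = rng.normal(scale=0.01, size=2)   # isotropic perturbation
    w = w - eta * (grad(w) + noise)

print(w)  # lands near (+1, 0) or (-1, 0), i.e. away from the saddle

Plain gradient descent started exactly at the saddle would stay there; the small random perturbation is what pushes the iterate onto a descent direction.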

Papers citing "Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition"

12 / 12 papers shown
Dyn-D$^2$P: Dynamic Differentially Private Decentralized Learning with Provable Utility Guarantee
Zehan Zhu, Yan Huang, Xin Wang, Shouling Ji, Jinming Xu (10 May 2025)

Preconditioned Gradient Descent for Over-Parameterized Nonconvex Matrix Factorization
G. Zhang, Salar Fattahi, Richard Y. Zhang (13 Apr 2025)

Symmetry & Critical Points for Symmetric Tensor Decomposition Problems
Yossi Arjevani, Gal Vinograd (13 Jun 2023)

Stochastic Compositional Optimization with Compositional Constraints
Shuoguang Yang, Wei You, Zhe Zhang, Ethan X. Fang (09 Sep 2022)

Preconditioned Gradient Descent for Overparameterized Nonconvex Burer--Monteiro Factorization with Global Optimality Certification
G. Zhang, Salar Fattahi, Richard Y. Zhang (07 Jun 2022)

Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions
Aaron Mishkin, Arda Sahiner, Mert Pilanci (02 Feb 2022)

An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias
Lu Yu, Krishnakumar Balasubramanian, S. Volgushev, Murat A. Erdogdu (14 Jun 2020)

ISLET: Fast and Optimal Low-rank Tensor Regression via Importance Sketching
Anru R. Zhang, Yuetian Luo, Garvesh Raskutti, M. Yuan (09 Nov 2019)

Train longer, generalize better: closing the generalization gap in large batch training of neural networks
Elad Hoffer, Itay Hubara, Daniel Soudry (24 May 2017)

Exact solutions to the nonlinear dynamics of learning in deep linear neural networks
Andrew M. Saxe, James L. McClelland, Surya Ganguli (20 Dec 2013)

Low-rank Matrix Completion using Alternating Minimization
Prateek Jain, Praneeth Netrapalli, Sujay Sanghavi (03 Dec 2012)

Tensor decompositions for learning latent variable models
Anima Anandkumar, Rong Ge, Daniel J. Hsu, Sham Kakade, Matus Telgarsky (29 Oct 2012)