ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition
arXiv:1802.07301 (latest version: v3)
20 February 2018
Marco Mondelli
Andrea Montanari

Papers citing "On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition"

37 / 37 papers shown
Identifiability of Deep Polynomial Neural Networks
K. Usevich
Clara Dérand
Ricardo Augusto Borsoi
Marianne Clausel
20 Jun 2025
Learning Gaussian Multi-Index Models with Gradient Flow: Time Complexity and Directional Convergence
Berfin Simsek
Amire Bendjeddou
Daniel Hsu
13 Nov 2024
Sparse MTTKRP Acceleration for Tensor Decomposition on GPU
Sasindu Wijeratne
Rajgopal Kannan
Viktor Prasanna
14 May 2024
Dynasor: A Dynamic Memory Layout for Accelerating Sparse MTTKRP for Tensor Decomposition on Multi-core CPU
Sasindu Wijeratne
Rajgopal Kannan
Viktor Prasanna
17 Sep 2023
Provable Identifiability of Two-Layer ReLU Neural Networks via LASSO Regularization
Geng Li
G. Wang
Jie Ding
07 May 2023
Expand-and-Cluster: Parameter Recovery of Neural Networks
Flavio Martinelli
Berfin Simsek
W. Gerstner
Johanni Brea
25 Apr 2023
Tensor Networks Meet Neural Networks: A Survey and Future Perspectives
Maolin Wang
Yu Pan
Zenglin Xu
Xiangli Yang
Guangxi Li
Andrzej Cichocki
22 Jan 2023
Finite Sample Identification of Wide Shallow Neural Networks with Biases
M. Fornasier
T. Klock
Marco Mondelli
Michael Rauchensteiner
08 Nov 2022
Lower Bounds for the Convergence of Tensor Power Iteration on Random Overcomplete Models
Yuchen Wu
Kangjie Zhou
07 Nov 2022
Performance Modeling Sparse MTTKRP Using Optical Static Random Access Memory on FPGA
Sasindu Wijeratne
Akhilesh R. Jaiswal
Ajey P. Jacob
Bingyi Zhang
Viktor Prasanna
22 Aug 2022
Towards Programmable Memory Controller for Tensor Decomposition
Sasindu Wijeratne
Ta-Yang Wang
Rajgopal Kannan
Viktor Prasanna
17 Jul 2022
Sharp asymptotics on the compression of two-layer neural networks
Mohammad Hossein Amani
Simone Bombari
Marco Mondelli
Rattana Pukdee
Stefano Rini
17 May 2022
A Robust Spectral Algorithm for Overcomplete Tensor Decomposition
Samuel B. Hopkins
T. Schramm
Jonathan Shi
05 Mar 2022
Reconfigurable Low-latency Memory System for Sparse Matricized Tensor Times Khatri-Rao Product on FPGA
Sasindu Wijeratne
Rajgopal Kannan
Viktor Prasanna
18 Sep 2021
The Rate of Convergence of Variation-Constrained Deep Neural Networks
Gen Li
Jie Ding
22 Jun 2021
Achieving Small Test Error in Mildly Overparameterized Neural Networks
Shiyu Liang
Ruoyu Sun
R. Srikant
24 Apr 2021
Stable Recovery of Entangled Weights: Towards Robust Identification of Deep Neural Networks from Minimal Samples
Christian Fiedler
M. Fornasier
T. Klock
Michael Rauchensteiner
18 Jan 2021
Optimal High-order Tensor SVD via Tensor-Train Orthogonal Iteration
Yuchen Zhou
Anru R. Zhang
Lili Zheng
Yazhen Wang
06 Oct 2020
The Efficacy of $L_1$ Regularization in Two-Layer Neural Networks
Gen Li
Yuantao Gu
Jie Ding
02 Oct 2020
From Symmetry to Geometry: Tractable Nonconvex Problems
Yuqian Zhang
Qing Qu
John N. Wright
14 Jul 2020
Reducibility and Statistical-Computational Gaps from Secret Leakage
Matthew Brennan
Guy Bresler
16 May 2020
Estimating multi-index models with response-conditional least squares
T. Klock
A. Lanteri
Stefano Vigogna
10 Mar 2020
Taylorized Training: Towards Better Approximation of Neural Network Training at Finite Width
Yu Bai
Ben Krause
Huan Wang
Caiming Xiong
R. Socher
10 Feb 2020
Avoiding Spurious Local Minima in Deep Quadratic Networks
A. Kazemipour
Brett W. Larsen
S. Druckmann
31 Dec 2019
Revisiting Landscape Analysis in Deep Neural Networks: Eliminating Decreasing Paths to Infinity
Shiyu Liang
Ruoyu Sun
R. Srikant
31 Dec 2019
Optimization for deep learning: theory and algorithms
Ruoyu Sun
19 Dec 2019
Sub-Optimal Local Minima Exist for Neural Networks with Almost All Non-Linear Activations
Tian Ding
Dawei Li
Ruoyu Sun
04 Nov 2019
Robust and Resource Efficient Identification of Two Hidden Layer Neural Networks
M. Fornasier
T. Klock
Michael Rauchensteiner
30 Jun 2019
Decoupling Gating from Linearity
Jonathan Fiat
Eran Malach
Shai Shalev-Shwartz
12 Jun 2019
A Selective Overview of Deep Learning
Jianqing Fan
Cong Ma
Yiqiao Zhong
10 Apr 2019
Learning Two Layer Rectified Neural Networks in Polynomial Time
Ainesh Bakshi
Rajesh Jayaram
David P. Woodruff
05 Nov 2018
On the Convergence Rate of Training Recurrent Neural Networks
Zeyuan Allen-Zhu
Yuanzhi Li
Zhao Song
29 Oct 2018
The Mismatch Principle: The Generalized Lasso Under Large Model Uncertainties
Martin Genzel
Gitta Kutyniok
20 Aug 2018
Tensor Methods for Additive Index Models under Discordance and Heterogeneity
Krishnakumar Balasubramanian
Jianqing Fan
Zhuoran Yang
17 Jul 2018
End-to-end Learning of a Convolutional Neural Network via Deep Tensor Decomposition
Samet Oymak
Mahdi Soltanolkotabi
16 May 2018
Guaranteed Recovery of One-Hidden-Layer Neural Networks via Cross Entropy
H. Fu
Yuejie Chi
Yingbin Liang
18 Feb 2018
Spurious Valleys in Two-layer Neural Network Optimization Landscapes
Luca Venturi
Afonso S. Bandeira
Joan Bruna
18 Feb 2018