Guaranteed Recovery of One-Hidden-Layer Neural Networks via Cross Entropy

18 February 2018
H. Fu, Yuejie Chi, Yingbin Liang
Topics: FedML

Papers citing "Guaranteed Recovery of One-Hidden-Layer Neural Networks via Cross Entropy"

11 papers shown

How does promoting the minority fraction affect generalization? A theoretical study of the one-hidden-layer neural network on group imbalance
Hongkang Li, Shuai Zhang, Yihua Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen
12 Mar 2024

A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity
Hongkang Li, Meng Wang, Sijia Liu, Pin-Yu Chen
Topics: ViT, MLT
12 Feb 2023

Local Identifiability of Deep ReLU Neural Networks: the Theory
Joachim Bona-Pellissier, François Malgouyres, François Bachoc
Topics: FAtt
15 Jun 2022

How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong
Topics: SSL, MLT
21 Jan 2022

Parameter identifiability of a deep feedforward ReLU neural network
Joachim Bona-Pellissier, François Bachoc, François Malgouyres
24 Dec 2021

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong
Topics: UQCV, MLT
12 Oct 2021

Efficient Sparse Coding using Hierarchical Riemannian Pursuit
Ye Xue, Vincent K. N. Lau, Songfu Cai
21 Apr 2021

Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators
Reinhard Heckel, Mahdi Soltanolkotabi
Topics: DiffM
31 Oct 2019

Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization
G. Wang, G. Giannakis, Jie Chen
Topics: MLT
14 Aug 2018

End-to-end Learning of a Convolutional Neural Network via Deep Tensor Decomposition
Samet Oymak, Mahdi Soltanolkotabi
16 May 2018

Benefits of depth in neural networks
Matus Telgarsky
14 Feb 2016