Optimization Landscape and Expressivity of Deep CNNs
Quynh N. Nguyen, Matthias Hein
30 October 2017 · arXiv:1710.10928

Papers citing "Optimization Landscape and Expressivity of Deep CNNs" (15 papers shown)
Noisy Gradient Descent Converges to Flat Minima for Nonconvex Matrix Factorization
Tianyi Liu, Yan Li, S. Wei, Enlu Zhou, T. Zhao · 24 Feb 2021

Learning Graph Neural Networks with Approximate Gradient Descent
Qunwei Li, Shaofeng Zou, Leon Wenliang Zhong (GNN) · 07 Dec 2020

Expressivity of Deep Neural Networks
Ingo Gühring, Mones Raslan, Gitta Kutyniok · 09 Jul 2020

Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks
Yu Bai, J. Lee · 03 Oct 2019

GridDehazeNet: Attention-Based Multi-Scale Network for Image Dehazing
Xiaohong Liu, Yongrui Ma, Zhihao Shi, Jun Chen · 08 Aug 2019

Learning One-hidden-layer Neural Networks under General Input Distributions
Weihao Gao, Ashok Vardhan Makkuva, Sewoong Oh, Pramod Viswanath (MLT) · 09 Oct 2018

A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks
Sanjeev Arora, Nadav Cohen, Noah Golowich, Wei Hu · 04 Oct 2018

Adding One Neuron Can Eliminate All Bad Local Minima
Shiyu Liang, Ruoyu Sun, J. Lee, R. Srikant · 22 May 2018

How Many Samples are Needed to Estimate a Convolutional or Recurrent Neural Network?
S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Aarti Singh (SSL) · 21 May 2018

Improved Learning of One-hidden-layer Convolutional Neural Networks with Overlaps
S. Du, Surbhi Goel (MLT) · 20 May 2018

Understanding the Loss Surface of Neural Networks for Binary Classification
Shiyu Liang, Ruoyu Sun, Yixuan Li, R. Srikant · 19 Feb 2018

Global optimality conditions for deep neural networks
Chulhee Yun, S. Sra, Ali Jadbabaie · 08 Jul 2017

ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation
Adam Paszke, Abhishek Chaurasia, Sangpil Kim, Eugenio Culurciello (SSeg) · 07 Jun 2016

Benefits of depth in neural networks
Matus Telgarsky · 14 Feb 2016

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun (ODL) · 30 Nov 2014