ResearchTrend.AI

Collapse of Deep and Narrow Neural Nets
arXiv:1808.04947

15 August 2018
Lu Lu, Yanhui Su, George Karniadakis · ODL

Papers citing "Collapse of Deep and Narrow Neural Nets"

31 papers

Hysteresis Activation Function for Efficient Inference
Moshe Kimhi, Idan Kashani, A. Mendelson, Chaim Baskin · LLMSV · 15 Nov 2024

Data Topology-Dependent Upper Bounds of Neural Network Widths
Sangmin Lee, Jong Chul Ye · 25 May 2023

Empirical study of the modulus as activation function in computer vision applications
Iván Vallés-Pérez, E. Soria-Olivas, M. Martínez-Sober, Antonio J. Serrano, Joan Vila-Francés, J. Gómez-Sanchís · 15 Jan 2023

CACTO: Continuous Actor-Critic with Trajectory Optimization -- Towards global optimality
Gianluigi Grandesso, Elisa Alboni, G. P. R. Papini, Patrick M. Wensing, Andrea Del Prete · 12 Nov 2022

Nish: A Novel Negative Stimulated Hybrid Activation Function
Yildiray Anagün, Ş. Işık · 17 Oct 2022

A Hybrid Model and Learning-Based Adaptive Navigation Filter
B. Or, Itzik Klein · 14 Jun 2022

Testing Feedforward Neural Networks Training Programs
Houssem Ben Braiek, Foutse Khomh · AAML · 01 Apr 2022

Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions
Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa · MLT · 13 Dec 2021

Sparsely Changing Latent States for Prediction and Planning in Partially Observable Domains
Christian Gumbsch, Martin Volker Butz, Georg Martius · AI4CE · 29 Oct 2021

Deep neural networks with controlled variable selection for the identification of putative causal genetic variants
P. H. Kassani, Fred Lu, Yann Le Guen, Zihuai He · 29 Sep 2021

Training Deep Spiking Auto-encoders without Bursting or Dying Neurons through Regularization
Justus F. Hübotter, Pablo Lanillos, Jakub M. Tomczak · 22 Sep 2021

LEA-Net: Layer-wise External Attention Network for Efficient Color Anomaly Detection
Ryoya Katafuchi, T. Tokunaga · 12 Sep 2021

Combining data assimilation and machine learning to estimate parameters of a convective-scale model
Stefanie Legler, T. Janjić · 07 Sep 2021

ERANNs: Efficient Residual Audio Neural Networks for Audio Pattern Recognition
S. Verbitskiy, Vladimir Berikov, Viacheslav Vyshegorodtsev · 03 Jun 2021

On the approximation of functions by tanh neural networks
Tim De Ryck, S. Lanthaler, Siddhartha Mishra · 18 Apr 2021

A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions
Arnulf Jentzen, Adrian Riekert · MLT · 01 Apr 2021

A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions
Patrick Cheridito, Arnulf Jentzen, Adrian Riekert, Florian Rossmannek · 19 Feb 2021

Upgraded W-Net with Attention Gates and its Application in Unsupervised 3D Liver Segmentation
Dhanunjaya Mitta, S. Chatterjee, Oliver Speck, A. Nürnberger · SSeg, MedIm · 20 Nov 2020

Universal Activation Function For Machine Learning
Brosnan Yuen, Minh Tu Hoang, Xiaodai Dong, Tao Lu · 07 Nov 2020

FAN: Frequency Aggregation Network for Real Image Super-resolution
Yingxue Pang, Xin Li, Xin Jin, Yaojun Wu, Jianzhao Liu, Sen Liu, Zhibo Chen · 30 Sep 2020

Review: Deep Learning in Electron Microscopy
Jeffrey M. Ede · 17 Sep 2020

What Do Neural Networks Learn When Trained With Random Labels?
Hartmut Maennel, Ibrahim M. Alabdulmohsin, Ilya O. Tolstikhin, R. Baldock, Olivier Bousquet, Sylvain Gelly, Daniel Keysers · FedML · 18 Jun 2020

Hindsight Logging for Model Training
Rolando Garcia, Eric Liu, Vikram Sreekanti, Bobby Yan, Anusha Dandamudi, Joseph E. Gonzalez, J. M. Hellerstein, Koushik Sen · VLM · 12 Jun 2020

Non-convergence of stochastic gradient descent in the training of deep neural networks
Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek · 12 Jun 2020

ActGAN: Flexible and Efficient One-shot Face Reenactment
Ivan Kosarevych, Marian Petruk, Markian Kostiv, Orest Kupyn, M. Maksymenko, Volodymyr Budzan · CVBM, PICV, GAN · 30 Mar 2020

How Does BN Increase Collapsed Neural Network Filters?
Sheng Zhou, Xinjiang Wang, Ping Luo, Xue Jiang, Wenjie Li, Wei Zhang · 30 Jan 2020

Deep Learning Models for Global Coordinate Transformations that Linearize PDEs
Craig Gin, Bethany Lusch, Steven L. Brunton, J. Nathan Kutz · 07 Nov 2019

Unsupervised Boosting-based Autoencoder Ensembles for Outlier Detection
Hamed Sarvari, C. Domeniconi, Bardh Prenkaj, Giovanni Stilo · UQCV · 22 Oct 2019

DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators
Lu Lu, Pengzhan Jin, George Karniadakis · 08 Oct 2019

DeepXDE: A deep learning library for solving differential equations
Lu Lu, Xuhui Meng, Zhiping Mao, George Karniadakis · PINN, AI4CE · 10 Jul 2019

Deeply learned face representations are sparse, selective, and robust
Yi Sun, Xiaogang Wang, Xiaoou Tang · CVBM · 03 Dec 2014