Adam Induces Implicit Weight Sparsity in Rectifier Neural Networks

19 December 2018
A. Yaguchi, Taiji Suzuki, Wataru Asano, Shuhei Nitta, Y. Sakata, A. Tanizawa
arXiv: 1812.08119 (abs / PDF / HTML)

Papers citing "Adam Induces Implicit Weight Sparsity in Rectifier Neural Networks"

7 papers
Spiking Synaptic Penalty: Appropriate Penalty Term for Energy-Efficient Spiking Neural Networks
Kazuma Suetake, Takuya Ushimaru, Ryuji Saiin, Yoshihide Sawada
03 Feb 2023
The Impact of Activation Sparsity on Overfitting in Convolutional Neural Networks
Karim Huesmann, Luis Garcia Rodriguez, Lars Linsen, Benjamin Risse
13 Apr 2021
Schizophrenia-mimicking layers outperform conventional neural network layers
Frontiers in Neurorobotics (FN), 2020
R. Mizutani, Senta Noguchi, R. Saiga, Yuichi Yamashita, M. Miyashita, Makoto Arai, M. Itokawa
23 Sep 2020
Exploiting the Full Capacity of Deep Neural Networks while Avoiding Overfitting by Targeted Sparsity Regularization
Karim Huesmann, Soeren Klemm, Lars Linsen, Benjamin Risse
21 Feb 2020
How Does BN Increase Collapsed Neural Network Filters?
Sheng Zhou, Xinjiang Wang, Ping Luo, Xue Jiang, Wenjie Li, Wei Zhang
30 Jan 2020
Understanding the Effects of Pre-Training for Object Detectors via Eigenspectrum
Yosuke Shinya, E. Simo-Serra, Taiji Suzuki
09 Sep 2019
On Implicit Filter Level Sparsity in Convolutional Neural Networks
Dushyant Mehta, K. Kim, Christian Theobalt
29 Nov 2018