Learning Sparse Filters in Deep Convolutional Neural Networks with a l1/l2 Pseudo-Norm
arXiv:2007.10022 · 20 July 2020
Anthony Berthelier, Yongzhe Yan, Thierry Chateau, C. Blanc, S. Duffner, Christophe Garcia

Papers citing "Learning Sparse Filters in Deep Convolutional Neural Networks with a l1/l2 Pseudo-Norm"

2 / 2 papers shown
RED++ : Data-Free Pruning of Deep Neural Networks via Input Splitting and Output Merging
Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kévin Bailly
30 Sep 2021
Schizophrenia-mimicking layers outperform conventional neural network layers
Frontiers in Neurorobotics (FN), 2020
R. Mizutani, Senta Noguchi, R. Saiga, Yuichi Yamashita, M. Miyashita, Makoto Arai, M. Itokawa
23 Sep 2020