
Learned Threshold Pruning
arXiv:2003.00075, v2 (latest)
28 February 2020
K. Azarian, Brandon Smart, Jinwon Lee, Tijmen Blankevoort
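The paper above concerns pruning network weights against learned per-layer thresholds. As a rough illustration only (a minimal NumPy sketch, not the paper's actual method; function names and the sigmoid relaxation are assumptions), threshold pruning can be expressed as a hard mask at inference and a differentiable soft mask during training:

```python
import numpy as np

def soft_threshold_mask(weights, threshold, temperature=10.0):
    # Differentiable surrogate mask: sigmoid(T * (w^2 - tau)).
    # Approaches a 0/1 keep-mask as temperature grows.
    # (Hypothetical simplification of the learned-threshold idea.)
    return 1.0 / (1.0 + np.exp(-temperature * (weights ** 2 - threshold)))

def prune(weights, threshold):
    # Hard pruning at inference: zero every weight whose squared
    # magnitude falls below the (learned) layer threshold.
    return np.where(weights ** 2 < threshold, 0.0, weights)

w = np.array([0.05, -0.3, 0.8, -0.01, 0.2])
print(prune(w, 0.04))  # small-magnitude weights (0.05, -0.01) are zeroed
```

In the learned-threshold setting, `threshold` would be a trainable per-layer parameter updated by gradient descent through the soft mask, rather than a hand-tuned constant.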

Papers citing "Learned Threshold Pruning"

18 citing papers shown
Combining Relevance and Magnitude for Resource-Aware DNN Pruning
C. Chiasserini, F. Malandrino, Nuria Molner, Zhiqiang Zhao
21 May 2024
Breaking through Deterministic Barriers: Randomized Pruning Mask Generation and Selection
Jianwei Li, Weizhi Gao, Qi Lei, Dongkuan Xu
19 Oct 2023
Learning Activation Functions for Sparse Neural Networks
Mohammad Loni, Aditya Mohan, Mehdi Asadi, Marius Lindauer
18 May 2023
AUTOSPARSE: Towards Automated Sparse Training of Deep Neural Networks
Abhisek Kundu, Naveen Mellempudi, Dharma Teja Vooturi, Bharat Kaul, Pradeep Dubey
14 Apr 2023
What Matters In The Structured Pruning of Generative Language Models?
Michael Santacroce, Zixin Wen, Yelong Shen, Yuan-Fang Li
07 Feb 2023
A Novel Sparse Regularizer
Hovig Bayandorian
18 Jan 2023
DASS: Differentiable Architecture Search for Sparse neural networks
ACM Transactions on Embedded Computing Systems (TECS), 2022
H. Mousavi, Mohammad Loni, Mina Alibeigi, Masoud Daneshtalab
14 Jul 2022
Spartan: Differentiable Sparsity via Regularized Transportation
Neural Information Processing Systems (NeurIPS), 2022
Kai Sheng Tai, Taipeng Tian, Ser-Nam Lim
27 May 2022
Accelerating Attention through Gradient-Based Learned Runtime Pruning
International Symposium on Computer Architecture (ISCA), 2022
Zheng Li, Soroush Ghodrati, Amir Yazdanbakhsh, H. Esmaeilzadeh, Mingu Kang
07 Apr 2022
SD-Conv: Towards the Parameter-Efficiency of Dynamic Convolution
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2022
Shwai He, Chenbo Jiang, Daize Dong, Liang Ding
05 Apr 2022
Cyclical Pruning for Sparse Neural Networks
Suraj Srinivas, Andrey Kuzmin, Markus Nagel, M. V. Baalen, Andrii Skliar, Tijmen Blankevoort
02 Feb 2022
Understanding the Dynamics of DNNs Using Graph Modularity
Yao Lu, Wen Yang, Yunzhe Zhang, Zuohui Chen, Jinyin Chen, Qi Xuan, Zhen Wang, Xiaoniu Yang
24 Nov 2021
An Expectation-Maximization Perspective on Federated Learning
Christos Louizos, M. Reisser, Joseph B. Soriaga, Max Welling
19 Nov 2021
Learning Pruned Structure and Weights Simultaneously from Scratch: an Attention based Approach
BigData Congress [Services Society] (BSS), 2021
Qisheng He, Weisong Shi, Ming Dong
01 Nov 2021
Efficient and Sparse Neural Networks by Pruning Weights in a Multiobjective Learning Approach
Computers & Operations Research (Comput. Oper. Res.), 2020
Malena Reiners, K. Klamroth, Michael Stiglmayr
31 Aug 2020
HALO: Learning to Prune Neural Networks with Shrinkage
Skyler Seto, M. Wells, Wenyu Zhang
24 Aug 2020
Structured Convolutions for Efficient Neural Network Design
Brandon Smart, Yizhe Zhang, J. Lin, Fatih Porikli
06 Aug 2020
Soft Threshold Weight Reparameterization for Learnable Sparsity
International Conference on Machine Learning (ICML), 2020
Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, Ali Farhadi
08 Feb 2020