Soft Threshold Weight Reparameterization for Learnable Sparsity
arXiv:2002.03231 · 8 February 2020
Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, Ali Farhadi

Papers citing "Soft Threshold Weight Reparameterization for Learnable Sparsity"

42 papers shown
Advancing Weight and Channel Sparsification with Enhanced Saliency
Xinglong Sun, Maying Shen, Hongxu Yin, Lei Mao, Pavlo Molchanov, Jose M. Alvarez
05 Feb 2025

Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries
Chris Kolb, T. Weber, Bernd Bischl, David Rügamer
04 Feb 2025

Symmetric Pruning of Large Language Models
Kai Yi, Peter Richtárik
31 Jan 2025 · AAML, VLM

Pushing the Limits of Sparsity: A Bag of Tricks for Extreme Pruning
Andy Li, A. Durrant, Milan Markovic, Lu Yin, Georgios Leontidis, Tianlong Chen
20 Nov 2024

Mask in the Mirror: Implicit Sparsification
Tom Jacobs, R. Burkholz
19 Aug 2024

AdapMTL: Adaptive Pruning Framework for Multitask Learning Model
Mingcan Xiang, Steven Jiaxun Tang, Qizheng Yang, Hui Guan, Tongping Liu
07 Aug 2024 · VLM

SurvReLU: Inherently Interpretable Survival Analysis via Deep ReLU Networks
Xiaotong Sun, Peijie Qiu, Shengfan Zhang
19 Jul 2024 · OffRL

A Thorough Performance Benchmarking on Lightweight Embedding-based Recommender Systems
Hung Vinh Tran, Tong Chen, Quoc Viet Hung Nguyen, Zi-Rui Huang, Lizhen Cui, Hongzhi Yin
25 Jun 2024

Fast and Controllable Post-training Sparsity: Learning Optimal Sparsity Allocation with Global Constraint in Minutes
Ruihao Gong, Yang Yong, Zining Wang, Jinyang Guo, Xiuying Wei, Yuqing Ma, Xianglong Liu
09 May 2024

MaxQ: Multi-Axis Query for N:M Sparsity Network
Jingyang Xiang, Siqi Li, Junhao Chen, Zhuangzhi Chen, Tianxin Huang, Linpeng Peng, Yong-Jin Liu
12 Dec 2023

NOLA: Compressing LoRA using Linear Combination of Random Basis
Soroush Abbasi Koohpayegani, K. Navaneet, Parsa Nooralinejad, Soheil Kolouri, Hamed Pirsiavash
04 Oct 2023

A Simple and Effective Pruning Approach for Large Language Models
Mingjie Sun, Zhuang Liu, Anna Bair, J. Zico Kolter
20 Jun 2023

Magnitude Attention-based Dynamic Pruning
Jihye Back, Namhyuk Ahn, Jang-Hyun Kim
08 Jun 2023

Sparsified Model Zoo Twins: Investigating Populations of Sparsified Neural Network Models
D. Honegger, Konstantin Schurholt, Damian Borth
26 Apr 2023

NTK-SAP: Improving neural network pruning by aligning training dynamics
Yite Wang, Dawei Li, Ruoyu Sun
06 Apr 2023

Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!
Shiwei Liu, Tianlong Chen, Zhenyu (Allen) Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang
03 Mar 2023

Balanced Training for Sparse GANs
Yite Wang, Jing Wu, N. Hovakimyan, Ruoyu Sun
28 Feb 2023

Fast as CHITA: Neural Network Pruning with Combinatorial Optimization
Riade Benbaki, Wenyu Chen, X. Meng, Hussein Hazimeh, Natalia Ponomareva, Zhe Zhao, Rahul Mazumder
28 Feb 2023

Dynamic Sparse Network for Time Series Classification: Learning What to "see"
Qiao Xiao, Boqian Wu, Yu Zhang, Shiwei Liu, Mykola Pechenizkiy, Elena Mocanu, D. Mocanu
19 Dec 2022 · AI4TS

Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off
Shaoyi Huang, Bowen Lei, Dongkuan Xu, Hongwu Peng, Yue Sun, Mimi Xie, Caiwen Ding
30 Nov 2022

You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets
Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, ..., Yulong Pei, D. Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu
28 Nov 2022 · GNN

On the optimization and pruning for Bayesian deep learning
X. Ke, Yanan Fan
24 Oct 2022 · BDL, UQCV

Towards Sparsification of Graph Neural Networks
Hongwu Peng, Deniz Gurevin, Shaoyi Huang, Tong Geng, Weiwen Jiang, O. Khan, Caiwen Ding
11 Sep 2022 · GNN

SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance
Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kévin Bailly
08 Jul 2022

Spartan: Differentiable Sparsity via Regularized Transportation
Kai Sheng Tai, Taipeng Tian, Ser-Nam Lim
27 May 2022

Compression-aware Training of Neural Networks using Frank-Wolfe
Max Zimmer, Christoph Spiegel, S. Pokutta
24 May 2022

A Comprehensive Survey on Automated Machine Learning for Recommendations
Bo Chen, Xiangyu Zhao, Yejing Wang, Wenqi Fan, Huifeng Guo, Ruiming Tang
04 Apr 2022 · AI4TS

UDC: Unified DNAS for Compressible TinyML Models
Igor Fedorov, Ramon Matas, Hokchhay Tann, Chu Zhou, Matthew Mattina, P. Whatmough
15 Jan 2022 · AI4CE

How Well Do Sparse Imagenet Models Transfer?
Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh
26 Nov 2021

Efficient Neural Network Training via Forward and Backward Propagation Sparsification
Xiao Zhou, Weizhong Zhang, Zonghao Chen, Shizhe Diao, Tong Zhang
10 Nov 2021

SMOF: Squeezing More Out of Filters Yields Hardware-Friendly CNN Pruning
Yanli Liu, Bochen Guan, Qinwen Xu, Weiyi Li, Shuxue Quan
21 Oct 2021

Powerpropagation: A sparsity inducing weight reparameterisation
Jonathan Richard Schwarz, Siddhant M. Jayakumar, Razvan Pascanu, P. Latham, Yee Whye Teh
01 Oct 2021

M-FAC: Efficient Matrix-Free Approximations of Second-Order Information
Elias Frantar, Eldar Kurtic, Dan Alistarh
07 Jul 2021

Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, D. Mocanu
19 Jun 2021

Effective Sparsification of Neural Networks with Global Sparsity Constraint
Xiao Zhou, Weizhong Zhang, Hang Xu, Tong Zhang
03 May 2021

Post-training deep neural network pruning via layer-wise calibration
Ivan Lazarevich, Alexander Kozlov, Nikita Malinin
30 Apr 2021 · 3DPC

Lottery Jackpots Exist in Pre-trained Models
Yu-xin Zhang, Mingbao Lin, Yan Wang, Fei Chao, Rongrong Ji
18 Apr 2021

Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy
Lucas Liebenwein, Cenk Baykal, Brandon Carter, David K Gifford, Daniela Rus
04 Mar 2021 · AAML

Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch
Aojun Zhou, Yukun Ma, Junnan Zhu, Jianbo Liu, Zhijie Zhang, Kun Yuan, Wenxiu Sun, Hongsheng Li
08 Feb 2021

Learnable Embedding Sizes for Recommender Systems
Siyi Liu, Chen Gao, Yihong Chen, Depeng Jin, Yong Li
19 Jan 2021

Progressive Skeletonization: Trimming more fat from a network at initialization
Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Philip H. S. Torr, Grégory Rogez, P. Dokania
16 Jun 2020

Sparse Weight Activation Training
Md Aamir Raihan, Tor M. Aamodt
07 Jan 2020