Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
arXiv:2102.08124 · 16 February 2021
Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, S. Naor, Daniel Soudry
Papers citing "Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks"
17 of 17 papers shown
FedSpaLLM: Federated Pruning of Large Language Models
Guangji Bai, Yijiang Li, Zilinghan Li, Liang Zhao, Kibaek Kim
FedML · 20 Feb 2025
SLoPe: Double-Pruned Sparse Plus Lazy Low-Rank Adapter Pretraining of LLMs
Mohammad Mozaffari, Amir Yazdanbakhsh, Zhao Zhang, M. Dehnavi
28 Jan 2025
Mixture Compressor for Mixture-of-Experts LLMs Gains More
Wei Huang, Yue Liao, Jianhui Liu, Ruifei He, Haoru Tan, Shiming Zhang, Hongsheng Li, Si Liu, Xiaojuan Qi
MoE · 08 Oct 2024
Let the Code LLM Edit Itself When You Edit the Code
Zhenyu He, Jun Zhang, Shengjie Luo, Jingjing Xu, Z. Zhang, Di He
KELM · 03 Jul 2024
Effective Interplay between Sparsity and Quantization: From Theory to Practice
Simla Burcu Harma, Ayan Chakraborty, Elizaveta Kostenok, Danila Mishin, Dongho Ha, ..., Martin Jaggi, Ming Liu, Yunho Oh, Suvinay Subramanian, Amir Yazdanbakhsh
MQ · 31 May 2024
SparseDM: Toward Sparse Efficient Diffusion Models
Kafeng Wang, Jianfei Chen, He Li, Zhenpeng Mi, Jun-Jie Zhu
DiffM · 16 Apr 2024
Multilingual Brain Surgeon: Large Language Models Can be Compressed Leaving No Language Behind
Hongchuan Zeng, Hongshen Xu, Lu Chen, Kai Yu
06 Apr 2024
Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models
Xudong Lu, Qi Liu, Yuhui Xu, Aojun Zhou, Siyuan Huang, Bo-Wen Zhang, Junchi Yan, Hongsheng Li
MoE · 22 Feb 2024
FFSplit: Split Feed-Forward Network For Optimizing Accuracy-Efficiency Trade-off in Language Model Inference
Zirui Liu, Qingquan Song, Q. Xiao, Sathiya Keerthi Selvaraj, Rahul Mazumder, Aman Gupta, Xia Hu
08 Jan 2024
MaxQ: Multi-Axis Query for N:M Sparsity Network
Jingyang Xiang, Siqi Li, Junhao Chen, Zhuangzhi Chen, Tianxin Huang, Linpeng Peng, Yong-Jin Liu
12 Dec 2023
Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs
Yu-xin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, Rongrong Ji
SyDa · 13 Oct 2023
A Simple and Effective Pruning Approach for Large Language Models
Mingjie Sun, Zhuang Liu, Anna Bair, J. Zico Kolter
20 Jun 2023
Sparse-IFT: Sparse Iso-FLOP Transformations for Maximizing Training Efficiency
Vithursan Thangarasa, Shreyas Saxena, Abhay Gupta, Sean Lie
21 Mar 2023
Bi-directional Masks for Efficient N:M Sparse Training
Yu-xin Zhang, Yiting Luo, Mingbao Lin, Yunshan Zhong, Jingjing Xie, Fei Chao, Rongrong Ji
13 Feb 2023
Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction
Bowen Lei, Dongkuan Xu, Ruqi Zhang, Shuren He, Bani Mallick
09 Jan 2023
Exploiting the Partly Scratch-off Lottery Ticket for Quantization-Aware Training
Yunshan Zhong, Gongrui Nan, Yu-xin Zhang, Fei Chao, Rongrong Ji
MQ · 12 Nov 2022
Bimodal Distributed Binarized Neural Networks
T. Rozen, Moshe Kimhi, Brian Chmiel, A. Mendelson, Chaim Baskin
MQ · 05 Apr 2022