ResearchTrend.AI
Minimum Variance Unbiased N:M Sparsity for the Neural Gradients

Brian Chmiel, Itay Hubara, Ron Banner, Daniel Soudry
21 March 2022
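For context on the N:M pattern that the paper and its citing works study: in N:M (e.g. 2:4) sparsity, at most N of every M consecutive weights are nonzero, a pattern that sparse tensor cores can accelerate. The sketch below is plain deterministic magnitude pruning for illustration only; it is not the paper's method, which instead derives a minimum-variance *unbiased* stochastic mask for the neural gradients.

```python
import numpy as np

def prune_2_4(weights: np.ndarray) -> np.ndarray:
    """Keep the 2 largest-magnitude values in each group of 4 consecutive weights.

    Illustrative magnitude pruning only; the cited paper uses an unbiased
    stochastic mask for gradients rather than this deterministic rule.
    Assumes weights.size is a multiple of 4.
    """
    w = weights.reshape(-1, 4)                     # groups of 4 consecutive weights
    drop = np.argsort(np.abs(w), axis=1)[:, :2]    # 2 smallest magnitudes per group
    mask = np.ones_like(w, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)   # zero out the dropped positions
    return (w * mask).reshape(weights.shape)

w = np.array([0.1, -0.8, 0.05, 0.3, 0.9, 0.2, -0.4, 0.01])
print(prune_2_4(w))   # exactly 2 nonzeros survive in each group of 4
```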

Papers citing "Minimum Variance Unbiased N:M Sparsity for the Neural Gradients"

11 / 11 papers shown

  1. S-STE: Continuous Pruning Function for Efficient 2:4 Sparse Pre-training
     Yuezhou Hu, Jun-Jie Zhu, Jianfei Chen (13 Sep 2024)
  2. Accelerating Transformer Pre-training with 2:4 Sparsity
     Yuezhou Hu, Kang Zhao, Weiyu Huang, Jianfei Chen, Jun Zhu (02 Apr 2024)
  3. Abstracting Sparse DNN Acceleration via Structured Sparse Tensor Decomposition
     Geonhwa Jeong, Po-An Tsai, A. Bambhaniya, S. Keckler, Tushar Krishna (12 Mar 2024)
  4. Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model
     Zirui Liu, Guanchu Wang, Shaochen Zhong, Zhaozhuo Xu, Daochen Zha, ..., Zhimeng Jiang, Kaixiong Zhou, V. Chaudhary, Shuai Xu, Xia Hu (24 May 2023)
  5. HighLight: Efficient and Flexible DNN Acceleration with Hierarchical Structured Sparsity
     Yannan Nellie Wu, Po-An Tsai, Saurav Muralidharan, A. Parashar, Vivienne Sze, J. Emer (22 May 2023)
  6. Bi-directional Masks for Efficient N:M Sparse Training
     Yu-xin Zhang, Yiting Luo, Mingbao Lin, Yunshan Zhong, Jingjing Xie, Fei Chao, Rongrong Ji (13 Feb 2023)
  7. STEP: Learning N:M Structured Sparsity Masks from Scratch with Precondition
     Yucheng Lu, Shivani Agrawal, Suvinay Subramanian, Oleg Rybakov, Chris De Sa, Amir Yazdanbakhsh (02 Feb 2023)
  8. Exploiting the Partly Scratch-off Lottery Ticket for Quantization-Aware Training
     Yunshan Zhong, Gongrui Nan, Yu-xin Zhang, Fei Chao, Rongrong Ji (12 Nov 2022)
  9. Learning Best Combination for Efficient N:M Sparsity
     Yu-xin Zhang, Mingbao Lin, Zhihang Lin, Yiting Luo, Ke Li, Fei Chao, Yongjian Wu, Rongrong Ji (14 Jun 2022)
  10. Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
      Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, S. Naor, Daniel Soudry (16 Feb 2021)
  11. Comparing Rewinding and Fine-tuning in Neural Network Pruning
      Alex Renda, Jonathan Frankle, Michael Carbin (05 Mar 2020)