ResearchTrend.AI

Accelerating DNN Training with Structured Data Gradient Pruning (arXiv:2202.00774)

1 February 2022
Bradley McDanel, Helia Dinh, J. Magallanes
Papers citing "Accelerating DNN Training with Structured Data Gradient Pruning"

7 papers shown
Neuroplasticity in Artificial Intelligence -- An Overview and Inspirations on Drop In & Out Learning
Yupei Li, M. Milling, Björn Schuller (27 Mar 2025)
Accelerating Transformer Pre-training with 2:4 Sparsity
Yuezhou Hu, Kang Zhao, Weiyu Huang, Jianfei Chen, Jun Zhu (02 Apr 2024)
Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark
Yihua Zhang, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, ..., Wotao Yin, Mingyi Hong, Zhangyang Wang, Sijia Liu, Tianlong Chen (18 Feb 2024)
Efficient N:M Sparse DNN Training Using Algorithm, Architecture, and Dataflow Co-Design
Chao Fang, Wei Sun, Aojun Zhou, Zhongfeng Wang (22 Sep 2023)
PruMUX: Augmenting Data Multiplexing with Model Compression
Yushan Su, Vishvak Murahari, Karthik Narasimhan, Keqin Li (24 May 2023)
Minimum Variance Unbiased N:M Sparsity for the Neural Gradients
Brian Chmiel, Itay Hubara, Ron Banner, Daniel Soudry (21 Mar 2022)
Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, S. Naor, Daniel Soudry (16 Feb 2021)