Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training

23 September 2020
Dingqing Yang, Amin Ghasemazar, X. Ren, Maximilian Golub, G. Lemieux, Mieszko Lis

Papers citing "Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training" (11 papers)

SAFT: Towards Out-of-Distribution Generalization in Fine-Tuning
Bac Nguyen, Stefan Uhlich, Fabien Cardinaux, Lukas Mauch, Marzieh Edraki, Aaron Courville
Topics: OODD, CLL, VLM
03 Jul 2024

Workload-Aware Hardware Accelerator Mining for Distributed Deep Learning Training
Muhammad Adnan, Amar Phanishayee, Janardhan Kulkarni, Prashant J. Nair, Divyat Mahajan
23 Apr 2024

TinyCL: An Efficient Hardware Architecture for Continual Learning on Autonomous Systems
Eugenio Ressa, Alberto Marchisio, Maurizio Martina, Guido Masera, Muhammad Shafique
15 Feb 2024

SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks
Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh
09 Feb 2023

Sparseloop: An Analytical Approach To Sparse Tensor Accelerator Modeling
Yannan Nellie Wu, Po-An Tsai, A. Parashar, Vivienne Sze, J. Emer
12 May 2022

FlexBlock: A Flexible DNN Training Accelerator with Multi-Mode Block Floating Point Support
Seock-Hwan Noh, Jahyun Koo, Seunghyun Lee, Jongse Park, Jaeha Kung
Topics: AI4CE
13 Mar 2022

Sparsity Winning Twice: Better Robust Generalization from More Efficient Training
Tianlong Chen, Zhenyu (Allen) Zhang, Pengju Wang, Santosh Balachandra, Haoyu Ma, Zehao Wang, Zhangyang Wang
Topics: OOD, AAML
20 Feb 2022

Accelerating DNN Training with Structured Data Gradient Pruning
Bradley McDanel, Helia Dinh, J. Magallanes
01 Feb 2022

Dynamic Neural Network Architectural and Topological Adaptation and Related Methods -- A Survey
Lorenz Kummer
Topics: AI4CE
28 Jul 2021

Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
19 Jun 2021

SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning
Hanrui Wang, Zhekai Zhang, Song Han
17 Dec 2020