TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training and Inference

IEEE/ACM International Symposium on Microarchitecture (MICRO), 2020
1 September 2020
Mostafa Mahmoud
Isak Edo Vivancos
Ali Hadi Zadeh
Omar Mohamed Awad
Gennady Pekhimenko
Jorge Albericio
Andreas Moshovos
    MoE

Papers citing "TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training and Inference"

18 / 18 papers shown
A Survey: Collaborative Hardware and Software Design in the Era of Large Language Models
IEEE Circuits and Systems Magazine (IEEE CSM), 2024
Cong Guo
Feng Cheng
Zhixu Du
James Kiessling
Jonathan Ku
...
Qilin Zheng
Guanglei Zhou
Hai "Helen" Li
Yiran Chen
226
19
0
08 Oct 2024
The Impact of Uniform Inputs on Activation Sparsity and Energy-Latency Attacks in Computer Vision
Andreas Müller
Erwin Quiring
AAML
235
4
0
27 Mar 2024
BIM: Block-Wise Self-Supervised Learning with Masked Image Modeling
Yixuan Luo
Mengye Ren
Sai Qian Zhang
199
1
0
28 Nov 2023
CAMEL: Co-Designing AI Models and Embedded DRAMs for Efficient On-Device Learning
International Symposium on High-Performance Computer Architecture (HPCA), 2023
Sai Qian Zhang
Thierry Tambe
Nestor Cuevas
Gu-Yeon Wei
David Brooks
250
9
0
04 May 2023
Laplacian Pyramid-like Autoencoder
Sangjun Han
Taeil Hur
Youngmi Hur
291
3
0
26 Aug 2022
DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning
Mohammad Hossein Samavatian
Saikat Majumdar
Kristin Barber
R. Teodorescu
AAML
154
2
0
31 Jul 2022
Energy awareness in low precision neural networks
Nurit Spingarn-Eliezer
Ron Banner
Elad Hoffer
Hilla Ben-Yaacov
T. Michaeli
292
0
0
06 Feb 2022
Accelerating DNN Training with Structured Data Gradient Pruning
International Conference on Pattern Recognition (ICPR), 2022
Bradley McDanel
Helia Dinh
J. Magallanes
126
12
0
01 Feb 2022
BitTrain: Sparse Bitmap Compression for Memory-Efficient Training on the Edge
IFIP International Information Security Conference (IFIP SEC), 2021
Abdelrahman I. Hosny
Marina Neseem
Sherief Reda
MQ
302
4
0
29 Oct 2021
FAST: DNN Training Under Variable Precision Block Floating Point with Stochastic Rounding
International Symposium on High-Performance Computer Architecture (HPCA), 2021
Shanghang Zhang
Bradley McDanel
H. T. Kung
MQ
149
88
0
28 Oct 2021
MERCURY: Accelerating DNN Training By Exploiting Input Similarity
International Symposium on High-Performance Computer Architecture (HPCA), 2021
Vahid Janfaza
Kevin Weston
Moein Razavi
Shantanu Mandal
Farabi Mahmud
Alex Hilty
A. Muzahid
247
6
0
28 Oct 2021
Shift-BNN: Highly-Efficient Probabilistic Bayesian Neural Network Training via Memory-Friendly Pattern Retrieving
Qiyu Wan
Haojun Xia
Xingyao Zhang
Lening Wang
Shuaiwen Leon Song
Xin Fu
OOD
136
10
0
07 Oct 2021
S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration
International Symposium on High-Performance Computer Architecture (HPCA), 2021
Zhi-Gang Liu
P. Whatmough
Yuhao Zhu
Matthew Mattina
MQ
196
102
0
16 Jul 2021
HASI: Hardware-Accelerated Stochastic Inference, A Defense Against Adversarial Machine Learning Attacks
Mohammad Hossein Samavatian
Saikat Majumdar
Kristin Barber
R. Teodorescu
AAML
407
4
0
09 Jun 2021
HASCO: Towards Agile HArdware and Software CO-design for Tensor Computation
International Symposium on Computer Architecture (ISCA), 2021
Qingcheng Xiao
Wenlei Bao
Bingzhe Wu
Pengcheng Xu
Xuehai Qian
Yun Liang
268
75
0
04 May 2021
Demystifying BERT: Implications for Accelerator Design
Suchita Pati
Shaizeen Aga
Nuwan Jayasena
Matthew D. Sinclair
LLMAG
197
16
0
14 Apr 2021
SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning
International Symposium on High-Performance Computer Architecture (HPCA), 2020
Hanrui Wang
Zhekai Zhang
Song Han
487
494
0
17 Dec 2020
Accelerating convolutional neural network by exploiting sparsity on GPUs
ACM Transactions on Architecture and Code Optimization (TACO), 2019
Weizhi Xu
Yintai Sun
Shengyu Fan
Hui Yu
Xin Fu
335
8
0
22 Sep 2019