TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training and Inference
arXiv:2009.00748, 1 September 2020
Mostafa Mahmoud, Isak Edo Vivancos, Ali Hadi Zadeh, Omar Mohamed Awad, Gennady Pekhimenko, Jorge Albericio, Andreas Moshovos
Papers citing "TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training and Inference" (6 of 6 papers shown):
- DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning
  Mohammad Hossein Samavatian, Saikat Majumdar, Kristin Barber, R. Teodorescu (AAML), 31 Jul 2022
- Energy awareness in low precision neural networks
  Nurit Spingarn-Eliezer, Ron Banner, Elad Hoffer, Hilla Ben-Yaacov, T. Michaeli, 06 Feb 2022
- Accelerating DNN Training with Structured Data Gradient Pruning
  Bradley McDanel, Helia Dinh, J. Magallanes, 01 Feb 2022
- BitTrain: Sparse Bitmap Compression for Memory-Efficient Training on the Edge
  Abdelrahman I. Hosny, Marina Neseem, Sherief Reda (MQ), 29 Oct 2021
- SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning
  Hanrui Wang, Zhekai Zhang, Song Han, 17 Dec 2020
- GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
  Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman (ELM), 20 Apr 2018