SparseTrain: Exploiting Dataflow Sparsity for Efficient Convolutional Neural Networks Training
arXiv:2007.13595 · 21 July 2020
Pengcheng Dai, Jianlei Yang, Xucheng Ye, Xingzhou Cheng, Junyu Luo, Linghao Song, Yiran Chen, Weisheng Zhao
Papers citing "SparseTrain: Exploiting Dataflow Sparsity for Efficient Convolutional Neural Networks Training" (3 of 3 papers shown):
PLUM: Improving Inference Efficiency By Leveraging Repetition-Sparsity Trade-Off
Sachit Kuhar, Yash Jain, Alexey Tumanov · MQ · 04 Dec 2023
TinyFormer: Efficient Transformer Design and Deployment on Tiny Devices
Jianlei Yang, Jiacheng Liao, Fanding Lei, Meichen Liu, Junyi Chen, Lingkun Long, Han Wan, Bei Yu, Weisheng Zhao · MoE · 03 Nov 2023
Signed Binary Weight Networks
Sachit Kuhar, Alexey Tumanov, Judy Hoffman · MQ · 25 Nov 2022