SparseTrain: Exploiting Dataflow Sparsity for Efficient Convolutional Neural Networks Training

21 July 2020
Pengcheng Dai, Jianlei Yang, Xucheng Ye, Xingzhou Cheng, Junyu Luo, Linghao Song, Yiran Chen, Weisheng Zhao

Papers citing "SparseTrain: Exploiting Dataflow Sparsity for Efficient Convolutional Neural Networks Training"

3 papers shown:

PLUM: Improving Inference Efficiency By Leveraging Repetition-Sparsity Trade-Off
Sachit Kuhar, Yash Jain, Alexey Tumanov
MQ
04 Dec 2023

TinyFormer: Efficient Transformer Design and Deployment on Tiny Devices
Jianlei Yang, Jiacheng Liao, Fanding Lei, Meichen Liu, Junyi Chen, Lingkun Long, Han Wan, Bei Yu, Weisheng Zhao
MoE
03 Nov 2023

Signed Binary Weight Networks
Sachit Kuhar, Alexey Tumanov, Judy Hoffman
MQ
25 Nov 2022