Escoin: Efficient Sparse Convolutional Neural Network Inference on GPUs

Xuhao Chen
28 February 2018

Papers citing "Escoin: Efficient Sparse Convolutional Neural Network Inference on GPUs"

16 papers

FIARSE: Model-Heterogeneous Federated Learning via Importance-Aware Submodel Extraction
Neural Information Processing Systems (NeurIPS), 2024
Feijie Wu, Xingchen Wang, Yaqing Wang, Tianci Liu, Lu Su, Jing Gao
28 Jul 2024

STen: Productive and Efficient Sparsity in PyTorch
Andrei Ivanov, Nikoli Dryden, Tal Ben-Nun, Saleh Ashkboos, Torsten Hoefler
15 Apr 2023

FSCNN: A Fast Sparse Convolution Neural Network Inference System
Bo Ji, Tianyi Chen
17 Dec 2022

Speedup deep learning models on GPU by taking advantage of efficient unstructured pruning and bit-width reduction
Journal of Computer Science (JCS), 2021
Marcin Pietroń, Dominik Zurek
28 Dec 2021

On the Compression of Natural Language Models
S. Damadi
13 Dec 2021

RED++: Data-Free Pruning of Deep Neural Networks via Input Splitting and Output Merging
Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kévin Bailly
30 Sep 2021

Only Train Once: A One-Shot Neural Network Training And Pruning Framework
Neural Information Processing Systems (NeurIPS), 2021
Tianyi Chen, Bo Ji, Tianyu Ding, Biyi Fang, Guanyi Wang, Zhihui Zhu, Luming Liang, Yixin Shi, Sheng Yi, Xiao Tu
15 Jul 2021

High Performance Convolution Using Sparsity and Patterns for Inference in Deep Convolutional Neural Networks
Hossam Amer, Ahmed H. Salamah, A. Sajedi, En-Hui Yang
16 Apr 2021

SparseDNN: Fast Sparse Deep Learning Inference on CPUs
Ziheng Wang
20 Jan 2021

When deep learning models on GPU can be accelerated by taking advantage of unstructured sparsity
Marcin Pietroń, Dominik Zurek
12 Nov 2020

SparseRT: Accelerating Unstructured Sparsity on GPUs for Deep Learning Inference
International Conference on Parallel Architectures and Compilation Techniques (PACT), 2020
Ziheng Wang
26 Aug 2020

Self-Supervised GAN Compression
Chong Yu, Jeff Pool
03 Jul 2020

TIRAMISU: A Polyhedral Compiler for Dense and Sparse Deep Learning
Riyadh Baghdadi, Abdelkader Nadir Debbagh, K. Abdous, Fatima-Zohra Benhamida, Alex Renda, Jonathan Frankle, Michael Carbin, Saman P. Amarasinghe
07 May 2020

Accelerating convolutional neural network by exploiting sparsity on GPUs
ACM Transactions on Architecture and Code Optimization (TACO), 2019
Weizhi Xu, Yintai Sun, Shengyu Fan, Hui Yu, Xin Fu
22 Sep 2019

Accelerated CNN Training Through Gradient Approximation
Ziheng Wang, Sree Harsha Nelaturu
15 Aug 2019

Low-Memory Neural Network Training: A Technical Report
N. Sohoni, Christopher R. Aberger, Megan Leszczynski, Jian Zhang, Christopher Ré
24 Apr 2019