ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

DNN Training Acceleration via Exploring GPGPU Friendly Sparsity

11 March 2022
Zhuoran Song, Yihong Xu, Han Li, Naifeng Jing, Xiaoyao Liang, Li Jiang
arXiv:2203.05705 · PDF · HTML

Papers citing "DNN Training Acceleration via Exploring GPGPU Friendly Sparsity"

3 / 3 papers shown
On Efficient Training of Large-Scale Deep Learning Models: A Literature Review
Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, Dacheng Tao
VLM · 24 · 39 · 0 · 07 Apr 2023
Accelerating convolutional neural network by exploiting sparsity on GPUs
Weizhi Xu, Yintai Sun, Shengyu Fan, Hui Yu, Xin Fu
9 · 7 · 0 · 22 Sep 2019
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
MQ · 302 · 1,046 · 0 · 10 Feb 2017