
Accelerating CNN Training by Pruning Activation Gradients

1 August 2019
Xucheng Ye, Pengcheng Dai, Junyu Luo, Xin Guo, Weisheng Zhao, Jianlei Yang, Yiran Chen
arXiv: 1908.00173 (abs · PDF · HTML)

Papers citing "Accelerating CNN Training by Pruning Activation Gradients"

1 paper shown
Neural gradients are near-lognormal: improved quantized and sparse training
Brian Chmiel, Liad Ben-Uri, Moran Shkolnik, Elad Hoffer, Ron Banner, Daniel Soudry
15 Jun 2020