Diagonalwise Refactorization: An Efficient Training Method for Depthwise Convolutions

27 March 2018
Zheng Qin, Zhaoning Zhang, Dongsheng Li, Yiming Zhang, Yuxing Peng
Abstract

Depthwise convolutions provide significant performance benefits owing to the reduction in both parameters and mult-adds. However, training depthwise convolution layers with GPUs is slow in current deep learning frameworks because their implementations cannot fully utilize the GPU capacity. To address this problem, in this paper we present an efficient method (called diagonalwise refactorization) for accelerating the training of depthwise convolution layers. Our key idea is to rearrange the weight vectors of a depthwise convolution into a large diagonal weight matrix so as to convert the depthwise convolution into a single standard convolution, which is well supported by the cuDNN library that is highly optimized for GPU computations. We have implemented our training method in five popular deep learning frameworks. Evaluation results show that our proposed method gains 15.4× training speedup on Darknet, 8.4× on Caffe, 5.4× on PyTorch, 3.5× on MXNet, and 1.4× on TensorFlow, compared to their original implementations of depthwise convolutions.
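The PyTorch sketch below illustrates the core idea described in the abstract: the per-channel depthwise filters are scattered onto the diagonal of a full (C, C, k, k) weight tensor, so that a single standard convolution (the code path cuDNN optimizes) reproduces the depthwise result. This is an illustrative reading of the abstract, not the authors' released implementation; the function name `diagonalwise_depthwise_conv2d` and its parameters are hypothetical.

```python
import torch
import torch.nn.functional as F

def diagonalwise_depthwise_conv2d(x, dw_weight, stride=1, padding=1):
    """Depthwise convolution via diagonalwise refactorization (illustrative sketch).

    x:         (N, C, H, W) input tensor
    dw_weight: (C, 1, k, k) depthwise filters, one per channel

    The depthwise filters are placed on the diagonal (out_channel == in_channel)
    of a zero-filled standard-convolution weight of shape (C, C, k, k), so one
    ordinary convolution computes the same output as the grouped depthwise one.
    """
    C, _, k, _ = dw_weight.shape
    full_weight = dw_weight.new_zeros(C, C, k, k)
    idx = torch.arange(C, device=dw_weight.device)
    full_weight[idx, idx] = dw_weight[:, 0]          # fill the diagonal blocks
    return F.conv2d(x, full_weight, stride=stride, padding=padding)


# Quick equivalence check against PyTorch's grouped (depthwise) convolution.
if __name__ == "__main__":
    x = torch.randn(2, 8, 16, 16)
    w = torch.randn(8, 1, 3, 3)
    ref = F.conv2d(x, w, stride=1, padding=1, groups=8)   # native depthwise
    out = diagonalwise_depthwise_conv2d(x, w)
    print(torch.allclose(ref, out, atol=1e-5))             # expected: True
```

Note that this naive sketch trades extra memory and zero-valued mult-adds for the ability to use the standard convolution kernel; the paper's reported speedups come from applying this refactorization in each framework's training path rather than from this exact code.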
