A flexible FPGA accelerator for convolutional neural networks

16 December 2019
Kingshuk Majumder
Uday Bondhugula
Abstract

Though CNNs are highly parallel workloads, in the absence of efficient on-chip memory reuse techniques, an accelerator for them quickly becomes memory bound. In this paper, we propose a CNN accelerator design for inference that exploits all available forms of reuse to minimize off-chip memory access while increasing the utilization of available resources. The proposed design is composed of cores, each of which contains a one-dimensional array of processing elements (PEs). These cores can exploit the different types of reuse available in CNN layers of varying shapes without requiring any reconfiguration; in particular, our design minimizes underutilization due to problem sizes that are not perfect multiples of the underlying hardware array dimensions. A major obstacle to the adoption of FPGAs as a platform for CNN inference is the difficulty of programming these devices using hardware description languages. Our end goal is to address this as well, and we develop preliminary software support via a co-design in order to drive the accelerator from TensorFlow, a dominant high-level programming model. Our framework handles the tiling and scheduling of neural network layers and generates the low-level commands needed to execute the CNN. Experimental evaluation on a real system with a PCI Express-based Xilinx VC709 board demonstrates the effectiveness of our approach. Owing to an effective interconnect, the design maintains a high operating frequency as the number of PEs is scaled. The overall sustained performance is a good fraction of the accelerator's theoretical peak.
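The tiling step the abstract alludes to can be pictured with a short sketch. The Python snippet below is illustrative only: the layer dimensions, buffer budget, byte width, and footprint model are hypothetical assumptions, not the authors' actual framework. It picks the largest output tile of a convolution layer whose input, weight, and output working set fits within a fixed on-chip buffer, which is the kind of decision a tiling/scheduling layer must make before issuing low-level commands to the accelerator.

# Illustrative sketch (not from the paper): choose an output tile of a
# stride-1 convolution so that the input, weight, and output tiles all fit
# in a fixed on-chip buffer budget. All names and sizes are hypothetical.

from dataclasses import dataclass

@dataclass
class ConvLayer:
    h: int      # output height
    w: int      # output width
    c_in: int   # input channels
    c_out: int  # output channels
    k: int      # kernel size (k x k)

def tile_footprint_bytes(layer, th, tw, tco, bytes_per_elem=2):
    """On-chip bytes needed to compute one (th x tw x tco) output tile."""
    # Stride-1 convolution: a th x tw output tile reads a
    # (th + k - 1) x (tw + k - 1) input region across all input channels.
    in_tile = (th + layer.k - 1) * (tw + layer.k - 1) * layer.c_in
    w_tile = layer.k * layer.k * layer.c_in * tco
    out_tile = th * tw * tco
    return (in_tile + w_tile + out_tile) * bytes_per_elem

def choose_tiling(layer, buffer_bytes):
    """Return the tile shape with the largest output volume that fits.
    Exhaustive search; fine for a sketch, too slow for production."""
    best = None
    for th in range(1, layer.h + 1):
        for tw in range(1, layer.w + 1):
            for tco in range(1, layer.c_out + 1):
                if tile_footprint_bytes(layer, th, tw, tco) <= buffer_bytes:
                    vol = th * tw * tco
                    if best is None or vol > best[0]:
                        best = (vol, (th, tw, tco))
    return best[1] if best else None

if __name__ == "__main__":
    layer = ConvLayer(h=56, w=56, c_in=64, c_out=128, k=3)
    print(choose_tiling(layer, buffer_bytes=256 * 1024))

A scheduler along these lines must also handle boundary tiles when layer dimensions are not multiples of the chosen tile sizes; that edge case is where the paper's claim of minimizing underutilization for problem sizes that do not match the hardware array dimensions comes into play.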
