ApproxDARTS: Differentiable Neural Architecture Search with Approximate Multipliers

8 April 2024
Michal Pinos
Lukáš Sekanina
Vojtěch Mrázek
Abstract

Integrating the principles of approximate computing into the design of hardware-aware deep neural networks (DNNs) has led to DNN implementations showing good output quality and highly optimized hardware parameters such as low latency or low inference energy. In this work, we present ApproxDARTS, a neural architecture search (NAS) method that enables the popular differentiable architecture search method DARTS to exploit approximate multipliers and thus reduce the power consumption of the generated neural networks. We show on the CIFAR-10 data set that ApproxDARTS is able to perform a complete architecture search in less than 10 GPU hours and produce competitive convolutional neural networks (CNNs) containing approximate multipliers in their convolutional layers. For example, ApproxDARTS produced a CNN whose arithmetic operations in the inference phase consume (a) 53.84% less energy than a CNN using native 32-bit floating-point multipliers and (b) 5.97% less energy than a CNN using exact 8-bit fixed-point multipliers, in both cases with a negligible accuracy drop. Moreover, ApproxDARTS is 2.3× faster than EvoApproxNAS, a similar but evolutionary-algorithm-based method.
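To make the idea concrete, below is a minimal, hypothetical sketch of the DARTS-style "mixed operation" that a method like ApproxDARTS builds on: each edge of the searched cell computes a softmax-weighted sum of candidate operations, and the candidate set includes convolutions whose products would come from approximate multipliers. The class names, the candidate list, and the 8-bit quantization stub standing in for an approximate multiplier are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a DARTS mixed operation with an "approximate-multiplier"
# convolution candidate. All names here are hypothetical; the real ApproxDARTS
# plugs actual approximate-multiplier models into the convolutions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FakeApproxConv(nn.Module):
    """Stand-in for a convolution backed by an approximate multiplier.
    The approximation is only *simulated* here by quantizing weights to
    8 bits before an exact convolution (an assumption for illustration)."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        w = self.conv.weight
        scale = w.abs().max() / 127.0 + 1e-12
        w_q = torch.clamp((w / scale).round(), -128, 127) * scale  # fake 8-bit weights
        return F.conv2d(x, w_q, padding=1)


class MixedOp(nn.Module):
    """DARTS mixed operation: a softmax over learnable architecture
    parameters alpha weights the outputs of all candidate operations."""

    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),  # exact fp32 conv
            FakeApproxConv(channels),                                 # "approximate" conv
            nn.Identity(),                                            # skip connection
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture parameters

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))


x = torch.randn(1, 16, 8, 8)
mixed = MixedOp(16)
print(mixed(x).shape)  # torch.Size([1, 16, 8, 8])
```

Because the mixing weights are differentiable, the search can trade accuracy against the energy of each multiplier type by gradient descent; after the bilevel optimization converges, the candidate with the largest alpha on each edge is kept for the final (discretized) architecture.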
