Shavette: Low Power Neural Network Acceleration via Algorithm-level Error Detection and Undervolting

17 October 2024
Mikael Rinkinen
Lauri Koskinen
Olli Silvén
Mehdi Safarpour
Abstract

Reduced voltage operation is an effective technique for substantially improving the energy efficiency of digital circuits. This brief introduces a simple approach for enabling reduced voltage operation of Deep Neural Network (DNN) accelerators through software modifications alone. Conventional approaches to enabling reduced voltage operation, e.g., Timing Error Detection (TED) systems, incur significant development costs and overheads, and are not applicable to off-the-shelf components. In contrast, the solution proposed in this paper relies on algorithm-based error detection; it is therefore implemented at low development cost, requires no circuit modifications, and is applicable even to commodity devices. By demonstrating the solution on popular DNNs, i.e., LeNet and VGG16, on a GPU platform, we achieve 18% to 25% energy savings with no loss of model accuracy and negligible throughput compromise (< 3.9%), accounting for the overheads of integrating the error detection schemes into the DNN. Integrating the presented algorithmic solution into a design is simpler than with conventional TED-based techniques, which require extensive circuit-level modifications, cell library characterization, or special support from the design tools.
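The abstract does not spell out the detection scheme, but algorithm-based error detection for the linear layers of a DNN typically follows the classic checksum idea: the column sums of a matrix product can be predicted analytically from the inputs, so a mismatch reveals an arithmetic fault such as a timing error under reduced voltage. Below is a minimal NumPy sketch of that idea; the function name, tolerance, and recovery comment are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def checked_matmul(A: np.ndarray, B: np.ndarray, tol: float = 1e-3) -> np.ndarray:
    """Matrix multiply protected by an algorithm-based checksum.

    For correct arithmetic, colsum(A) @ B == colsum(A @ B), so the
    checksum can be propagated through the multiplication and compared
    against the result. A mismatch flags a computation error, e.g. a
    timing fault caused by operating the device below nominal voltage.
    """
    C = A @ B                        # main computation (runs on the undervolted device)
    reference = A.sum(axis=0) @ B    # checksum propagated analytically: (1^T A) B
    if not np.allclose(C.sum(axis=0), reference, atol=tol):
        # Hypothetical recovery: a deployment would raise the supply
        # voltage one step and recompute the affected layer.
        raise RuntimeError("checksum mismatch: possible undervolting-induced error")
    return C
```

In an undervolting loop, such a check lets the voltage be lowered until errors begin to appear and then backed off, with only the checksum computation and occasional recomputation contributing to the throughput overhead.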

View on arXiv
@article{rinkinen2024_2410.13415,
  title={Shavette: Low Power Neural Network Acceleration via Algorithm-level Error Detection and Undervolting},
  author={Mikael Rinkinen and Lauri Koskinen and Olli Silvén and Mehdi Safarpour},
  journal={arXiv preprint arXiv:2410.13415},
  year={2024}
}