Optimization of Layer Skipping and Frequency Scaling for Convolutional Neural Networks under Latency Constraint

31 March 2025
Minh David Thao Chan, Ruoyu Zhao, Yukuan Jia, Ruiqing Mao, Sheng Zhou
Abstract

The energy consumption of Convolutional Neural Networks (CNNs) is a critical factor in deploying deep learning models on resource-limited equipment such as mobile devices and autonomous vehicles. We propose an approach that combines Proportional Layer Skipping (PLS) and Frequency Scaling (FS). Layer skipping reduces computational complexity by selectively bypassing network layers, while frequency scaling adjusts the processor's clock frequency to optimize energy use under latency constraints. Experiments with PLS and FS on ResNet-152 using the CIFAR-10 dataset demonstrated significant reductions in computational demand and energy consumption with minimal accuracy loss. This study offers practical solutions for improving real-time processing in resource-limited settings and provides insights into balancing computational efficiency and model performance.
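The abstract does not give implementation details, but a minimal sketch of the two ideas, assuming a ResNet-style network of shape-preserving residual blocks, could look like the following. SkippableBlock, apply_proportional_skipping, pick_frequency, and the even-spacing heuristic are illustrative assumptions, not the authors' method.

import torch.nn as nn

class SkippableBlock(nn.Module):
    """Wraps a shape-preserving residual block so it can be bypassed at inference."""
    def __init__(self, block):
        super().__init__()
        self.block = block
        self.skip = False  # set by the skipping policy below

    def forward(self, x):
        # When skipped, the identity shortcut carries the activation forward,
        # saving the block's convolutions entirely (valid only when the block
        # preserves the tensor shape, as ResNet's in-stage blocks do).
        return x if self.skip else self.block(x)

def apply_proportional_skipping(blocks, skip_ratio):
    """Bypass roughly `skip_ratio` of the blocks, spread evenly over depth."""
    num_skip = int(len(blocks) * skip_ratio)
    if num_skip == 0:
        return
    stride = len(blocks) / num_skip
    skip_ids = {int(i * stride) for i in range(num_skip)}
    for i, b in enumerate(blocks):
        b.skip = i in skip_ids

def pick_frequency(frequencies, predict_latency, latency_budget):
    """Return the lowest clock frequency whose predicted latency meets the
    budget; running slower generally reduces dynamic power and energy."""
    for f in sorted(frequencies):
        if predict_latency(f) <= latency_budget:
            return f
    return max(frequencies)  # no setting meets the budget: run as fast as possible

Under a latency budget the two knobs compose naturally: skipping blocks first lowers the work per inference, and the clock can then be dropped to the lowest setting whose predicted latency still fits the budget.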

@article{chan2025_2503.24014,
  title={Optimization of Layer Skipping and Frequency Scaling for Convolutional Neural Networks under Latency Constraint},
  author={Minh David Thao Chan and Ruoyu Zhao and Yukuan Jia and Ruiqing Mao and Sheng Zhou},
  journal={arXiv preprint arXiv:2503.24014},
  year={2025}
}