ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Combining Relevance and Magnitude for Resource-Aware DNN Pruning

21 May 2024
Carla Fabiana Chiasserini
Francesco Malandrino
Nuria Molner
Zhiqiang Zhao
Abstract

Pruning neural networks, i.e., removing some of their parameters whilst retaining their accuracy, is one of the main ways to reduce the latency of a machine learning pipeline, especially in resource- and/or bandwidth-constrained scenarios. In this context, the pruning technique, i.e., how the parameters to remove are chosen, is critical to system performance. In this paper, we propose a novel pruning approach, called FlexRel, based on combining training-time and inference-time information, namely, parameter magnitude and relevance, in order to improve the resulting accuracy whilst saving both computational resources and bandwidth. Our performance evaluation shows that FlexRel is able to achieve higher pruning factors, saving over 35% bandwidth for typical accuracy targets.
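The abstract describes scoring parameters by combining two criteria, magnitude and relevance, and pruning the lowest-scoring ones. The paper's exact scoring rule is not given on this page; the sketch below is only an illustration of the general idea, with a hypothetical weighted sum of the two (normalized) criteria and a made-up `alpha` mixing parameter.

```python
import numpy as np

def combined_pruning_mask(weights, relevance, alpha=0.5, prune_fraction=0.35):
    """Illustrative mask keeping high-scoring parameters (not FlexRel itself).

    score = alpha * normalized |weight| + (1 - alpha) * normalized relevance,
    then the `prune_fraction` lowest-scoring parameters are removed.
    """
    w = np.abs(weights).ravel()
    r = np.abs(relevance).ravel()
    # Normalize each criterion to [0, 1] so the two scales are comparable.
    w_n = w / (w.max() + 1e-12)
    r_n = r / (r.max() + 1e-12)
    score = alpha * w_n + (1.0 - alpha) * r_n

    k = int(prune_fraction * score.size)  # number of parameters to drop
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    threshold = np.partition(score, k - 1)[k - 1]  # k-th smallest score
    return (score > threshold).reshape(weights.shape)
```

Applying the mask element-wise (`weights * mask`) zeroes out the pruned parameters; in a bandwidth-constrained setting, only the surviving entries (and their indices) would need to be transmitted.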

View on arXiv
@article{chiasserini2025_2405.13088,
  title={Combining Relevance and Magnitude for Resource-Aware DNN Pruning},
  author={Carla Fabiana Chiasserini and Francesco Malandrino and Nuria Molner and Zhiqiang Zhao},
  journal={arXiv preprint arXiv:2405.13088},
  year={2025}
}