ResearchTrend.AI
Efficient Model-Based Deep Learning via Network Pruning and Fine-Tuning

3 November 2023
Chicago Y. Park
Weijie Gan
Zihao Zou
Yuyang Hu
Zhixin Sun
Ulugbek S. Kamilov
Abstract

Model-based deep learning (MBDL) is a powerful methodology for designing deep models to solve imaging inverse problems. MBDL networks can be seen as iterative algorithms that estimate the desired image using a physical measurement model and a learned image prior specified using a convolutional neural network (CNN). The iterative nature of MBDL networks increases the test-time computational complexity, which limits their applicability in certain large-scale applications. Here we make two contributions to address this issue: First, we show how structured pruning can be adopted to reduce the number of parameters in MBDL networks. Second, we present three methods to fine-tune the pruned MBDL networks to mitigate potential performance loss. Each fine-tuning strategy has a unique benefit that depends on the presence of a pre-trained model and a high-quality ground truth. We show that our pruning and fine-tuning approach can accelerate image reconstruction using popular deep equilibrium learning (DEQ) and deep unfolding (DU) methods by 50% and 32%, respectively, with nearly no performance loss. This work thus offers a step forward for solving inverse problems by showing the potential of pruning to improve the scalability of MBDL. Code is available at this https URL.
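To make the structured-pruning idea from the abstract concrete, here is a minimal NumPy sketch (not the authors' implementation) that removes whole output channels of a convolutional weight tensor by ranking their L2 norms; the function name, the ranking criterion, and the pruning ratio are illustrative assumptions:

```python
import numpy as np

def prune_conv_channels(weight, ratio=0.5):
    """Structured pruning sketch: drop the output channels of a conv
    weight tensor of shape (out_ch, in_ch, kH, kW) whose filters have
    the smallest L2 norms.

    Returns the pruned weight and the indices of the kept channels,
    so that downstream layers could be sliced consistently.
    """
    out_ch = weight.shape[0]
    n_keep = max(1, int(round(out_ch * (1.0 - ratio))))
    # L2 norm of each output channel's flattened filter
    norms = np.linalg.norm(weight.reshape(out_ch, -1), axis=1)
    # Keep the n_keep channels with the largest norms, in original order
    keep = np.sort(np.argsort(norms)[-n_keep:])
    return weight[keep], keep

# Toy example: a 3x3 conv layer with 8 output and 4 input channels.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 4, 3, 3))
w_pruned, kept = prune_conv_channels(w, ratio=0.5)
print(w_pruned.shape)  # (4, 4, 3, 3)
```

Because entire channels are removed (rather than individual weights being zeroed), the pruned layer is genuinely smaller, which is what yields the test-time speedups the paper reports; in practice the pruned network would then be fine-tuned to recover any lost accuracy.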

View on arXiv
@article{park2025_2311.02003,
  title={Efficient Model-Based Deep Learning via Network Pruning and Fine-Tuning},
  author={Chicago Y. Park and Weijie Gan and Zihao Zou and Yuyang Hu and Zhixin Sun and Ulugbek S. Kamilov},
  journal={arXiv preprint arXiv:2311.02003},
  year={2025}
}