Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes

Structured pruning is a promising approach to create smaller, faster LLMs. However, existing methods typically rely on backward passes, which can inflate memory requirements and compute costs. In this work we introduce Bonsai, a gradient-free structured pruning method that eliminates the need for backpropagation, significantly reducing memory and compute while achieving state-of-the-art pruning performance. Bonsai uses forward-pass-only perturbative pruning to enable efficient compression of large models on a broader range of hardware configurations. Unlike existing structured pruning approaches, Bonsai not only achieves better compression with fewer resources, but also produces models that are twice as fast as those generated by semi-structured pruning. As a concrete demonstration, we use Bonsai to prune an 8B LLaMA-3 model to 50% sparsity on a single A6000 GPU -- a task infeasible with backprop-based methods, which require 2-3x the memory. Our results show that removing backprop as a requirement not only enables pruning larger models on constrained hardware but can also lead to state-of-the-art efficiency and performance.
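To make the idea of forward-pass-only perturbative pruning concrete, here is a minimal sketch (not the authors' implementation): each structured unit -- illustrated here by the hidden neurons of a toy MLP -- is scored by the loss increase observed on a calibration batch when that unit is masked out, using only forward passes, and the lowest-scoring units are pruned. The model, calibration data, and 50% sparsity target below are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "model": the hidden neurons of one layer are the prunable units.
hidden = 16
model = nn.Sequential(nn.Linear(8, hidden), nn.ReLU(), nn.Linear(hidden, 4))
model.eval()

# Small calibration batch (stands in for a real calibration dataset).
x = torch.randn(64, 8)
y = torch.randint(0, 4, (64,))
loss_fn = nn.CrossEntropyLoss()

def calib_loss(mask: torch.Tensor) -> float:
    """Forward pass with selected hidden units zeroed out; no gradients needed."""
    with torch.no_grad():
        h = torch.relu(model[0](x)) * mask   # apply structured mask to hidden units
        return loss_fn(model[2](h), y).item()

base = calib_loss(torch.ones(hidden))

# Perturbative importance: loss increase when a single unit is removed.
scores = []
for i in range(hidden):
    mask = torch.ones(hidden)
    mask[i] = 0.0
    scores.append(calib_loss(mask) - base)

# Keep the most important half of the units (illustrative 50% sparsity target).
keep = torch.tensor(scores).argsort(descending=True)[: hidden // 2]
final_mask = torch.zeros(hidden)
final_mask[keep] = 1.0
print("pruned loss:", calib_loss(final_mask), "vs dense:", base)
```

Because every score comes from a forward pass under `torch.no_grad()`, peak memory stays at inference level; in practice the perturbed units would be attention heads or MLP channels of an LLM rather than neurons of a toy network.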
@article{dery2025_2402.05406,
  title   = {Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes},
  author  = {Lucio Dery and Steven Kolawole and Jean-François Kagy and Virginia Smith and Graham Neubig and Ameet Talwalkar},
  journal = {arXiv preprint arXiv:2402.05406},
  year    = {2025}
}