The Need for Speed: Pruning Transformers with One Recipe

Konstantinos N. Plataniotis
Abstract

We introduce the One-shot Pruning Technique for Interchangeable Networks (OPTIN) framework as a tool to increase the efficiency of pre-trained transformer architectures without requiring re-training. Recent works have explored improving transformer efficiency; however, they often incur computationally expensive re-training procedures or depend on architecture-specific characteristics, thus impeding practical wide-scale adoption. To address these shortcomings, the OPTIN framework leverages intermediate feature distillation, capturing the long-range dependencies of model parameters (coined trajectory), to produce state-of-the-art results on natural language, image classification, transfer learning, and semantic segmentation tasks without re-training. Given a FLOP constraint, the OPTIN framework will compress the network while maintaining competitive accuracy and improved throughput. In particular, we show ≤ 2% accuracy degradation from NLP baselines and a 0.5% improvement over state-of-the-art methods on image classification at competitive FLOP reductions. We further demonstrate generalization across tasks and architectures with comparable performance using Mask2Former for semantic segmentation and CNN-style networks. OPTIN presents one of the first one-shot efficient frameworks for compressing transformer architectures that generalizes well across different class domains, in particular natural language and image-related tasks, without re-training.
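To make the one-shot, re-training-free setting concrete, the sketch below prunes the hidden neurons of a simplified transformer feed-forward block under a keep-ratio budget, scoring each neuron by how much the block's output drifts on a calibration batch when that neuron is masked. This is only an illustrative proxy for OPTIN's trajectory-based, intermediate-feature-distillation criterion; the class and function names (MLPBlock, neuron_saliency, prune_one_shot) and the keep_ratio parameter are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: one-shot, re-training-free pruning of a transformer
# feed-forward block. The saliency (output deviation when a neuron is masked)
# is a stand-in for OPTIN's trajectory criterion; all names are illustrative.
import torch
import torch.nn as nn


class MLPBlock(nn.Module):
    """Simplified transformer feed-forward block: Linear -> GELU -> Linear."""
    def __init__(self, dim: int = 64, hidden: int = 256):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(self.act(self.fc1(x)))


@torch.no_grad()
def neuron_saliency(block: MLPBlock, x: torch.Tensor) -> torch.Tensor:
    """Score each hidden neuron by how much the block output changes
    when that neuron is zeroed out (a proxy for downstream feature drift)."""
    baseline = block(x)
    hidden = block.act(block.fc1(x))                    # (batch, tokens, hidden)
    scores = torch.empty(hidden.shape[-1])
    for j in range(hidden.shape[-1]):
        masked = hidden.clone()
        masked[..., j] = 0.0
        scores[j] = (block.fc2(masked) - baseline).norm()
    return scores


@torch.no_grad()
def prune_one_shot(block: MLPBlock, x: torch.Tensor, keep_ratio: float = 0.5):
    """Zero out the least salient hidden neurons to meet the budget,
    with no re-training step afterwards."""
    scores = neuron_saliency(block, x)
    n_keep = int(keep_ratio * scores.numel())
    keep = scores.topk(n_keep).indices
    mask = torch.zeros_like(scores)
    mask[keep] = 1.0
    block.fc1.weight.mul_(mask[:, None])   # prune rows of fc1 (hidden neurons)
    block.fc1.bias.mul_(mask)
    block.fc2.weight.mul_(mask[None, :])   # prune matching columns of fc2


# Usage: score on a small calibration batch, then keep ~50% of hidden neurons.
block = MLPBlock()
calib = torch.randn(8, 16, 64)             # (batch, tokens, dim) calibration data
prune_one_shot(block, calib, keep_ratio=0.5)
```

In practice the masked rows and columns would be physically removed (shrinking fc1 and fc2) so that the FLOP and throughput savings are realized, and the budget would be expressed directly as a FLOP constraint rather than a neuron keep ratio.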
