Pruning One More Token is Enough: Leveraging Latency-Workload Non-Linearities for Vision Transformers on the Edge

Abstract

This paper investigates how to efficiently deploy vision transformers on edge devices. Recent methods reduce the latency of transformer neural networks by removing or merging tokens, with small accuracy degradation. However, these methods are not designed with edge device deployment in mind: they do not leverage information about latency vs. workload trends to improve efficiency. First, we show that the latency-workload size relationship is nonlinear for certain workload sizes. We use this relationship to create a token pruning schedule. Second, we demonstrate a training-free token pruning method utilizing this schedule. We show that for single-batch inference, other methods increase latency by 18.6-30.3% with respect to the baseline, while we can reduce it by 9%. For similar latency (within 5.2%) across devices, we achieve 78.6%-84.5% ImageNet1K accuracy, while the state-of-the-art, Token Merging, achieves 45.8%-85.4%.
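To make the abstract's key observation concrete, the following is a minimal, hypothetical sketch of how a pruning schedule might be derived from a nonlinear latency-vs-token-count curve. The profiling function, tile size, and token counts below are illustrative assumptions, not details from the paper; the idea is simply that when latency grows in steps rather than smoothly, pruning down to a count just below a step removes a whole block of work at once.

```python
import math

def measure_latency(num_tokens, tile=32, per_tile_ms=0.9):
    # Stand-in for on-device profiling (hypothetical model): latency grows
    # in steps of `tile` tokens, e.g. due to GPU tiling or wave quantization,
    # so it is nonlinear in the workload size.
    return math.ceil(num_tokens / tile) * per_tile_ms

def pruning_targets(max_tokens, tile=32):
    # Token counts sitting just below each latency step: pruning to one of
    # these counts drops a full tile of work, so removing "one more token"
    # past a step yields a disproportionate latency reduction.
    candidates = []
    prev = measure_latency(max_tokens, tile)
    for n in range(max_tokens - 1, 0, -1):
        lat = measure_latency(n, tile)
        if lat < prev:  # crossed a step boundary in the latency curve
            candidates.append(n)
            prev = lat
    return candidates

# 197 = 196 patch tokens + 1 class token for a ViT-B/16 at 224x224 input.
targets = pruning_targets(197)
print(targets[:3])  # -> [192, 160, 128] under the assumed tile size of 32
```

Under this toy latency model, a schedule would keep token counts at the step boundaries (192, 160, 128, ...) rather than at arbitrary intermediate values, where pruning buys no latency at all.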
