
Exploiting Student Parallelism for Efficient GPU Inference of BERT-like Models in Online Services

Main: 8 pages · Appendix: 3 pages · Bibliography: 2 pages · 8 figures · 5 tables
Abstract

Due to their high accuracy, BERT-like models have been widely adopted for text mining and web search. However, large BERT-like models suffer from inefficient online inference, facing two problems on GPUs: (1) their high accuracy relies on large model depth, which linearly increases sequential computation on GPUs; (2) stochastic and dynamic online workloads incur extra costs from batching and padding. Therefore, we present \sys for the real-world setting of GPU inference on online workloads. At its core, \sys adopts stacking distillation and boosting ensemble, distilling the original deep model into a group of shallow but virtually stacked student models that run in parallel. This enables \sys to achieve a lower model depth (e.g., two layers) than prior methods and the lowest inference latency while maintaining accuracy. In addition, adaptive student pruning adjusts the number of students dynamically according to changing online workloads. In particular, for occasional workload bursts, it can temporarily decrease the number of students with minimal accuracy loss to improve system throughput. We conduct comprehensive experiments to verify the effectiveness of \sys; the results show that it outperforms the baselines by $4.1\times \sim 1.6\times$ in latency while maintaining accuracy, and achieves up to $22.27\times$ higher throughput under workload bursts.
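The core idea described in the abstract can be illustrated with a minimal sketch: a deep model is replaced by several shallow students whose outputs are combined additively (boosting-style), so the students can run in parallel, and under a workload burst the least important students are temporarily dropped. All names and the toy "student" functions below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of parallel student inference with adaptive pruning.
# A "student" here is a stand-in for a shallow (e.g., 2-layer) distilled model.
from concurrent.futures import ThreadPoolExecutor

def make_student(weight):
    # Toy student: scales each logit by a fixed weight (illustrative only).
    def student(x):
        return [weight * v for v in x]
    return student

def ensemble_predict(students, x):
    # Run all students in parallel and sum their outputs (boosting ensemble).
    with ThreadPoolExecutor() as pool:
        outputs = list(pool.map(lambda s: s(x), students))
    return [sum(vals) for vals in zip(*outputs)]

students = [make_student(w) for w in (3, 2, 1)]
full = ensemble_predict(students, [1, 2])      # → [6, 12]

# Adaptive student pruning: under a burst, drop the last (least important)
# student to trade a small amount of accuracy for higher throughput.
pruned = ensemble_predict(students[:2], [1, 2])  # → [5, 10]
```

Because each student is shallow, its sequential depth (and hence latency) is small, while the ensemble recovers accuracy; pruning simply shrinks the list of students submitted to the pool.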
