BinaryBERT: Pushing the Limit of BERT Quantization

Annual Meeting of the Association for Computational Linguistics (ACL), 2021
Lifeng Shang
Xin Jiang
Qun Liu
Michael Lyu
Irwin King
Abstract

The rapid development of large pre-trained language models has greatly increased the demand for model compression techniques, among which quantization is a popular solution. In this paper, we propose BinaryBERT, which pushes BERT quantization to the limit with weight binarization. We find that a binary BERT is harder to train directly than a ternary counterpart due to its complex and irregular loss landscape. Therefore, we propose ternary weight splitting, which initializes the binary model by equivalently splitting a half-sized ternary network. The binary model thus inherits the good performance of the ternary model, and can be further enhanced by fine-tuning the new architecture after splitting. Empirical results show that BinaryBERT has only a negligible performance drop compared with the full-precision BERT-base while being 24× smaller, achieving state-of-the-art results on the GLUE and SQuAD benchmarks.
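
The core idea of ternary weight splitting can be illustrated with a minimal sketch. The snippet below is not the paper's exact construction (which also handles the latent full-precision weights used during quantization-aware training); it only shows one way to split a ternary weight tensor with values in {-α, 0, +α} into two binary tensors with values in {-α/2, +α/2} whose element-wise sum reproduces the ternary weights, so the wider binary model after splitting starts from the same function as the ternary one. The function name `split_ternary_weights` and the use of PyTorch are illustrative assumptions, not part of the paper.

```python
import torch

def split_ternary_weights(w_ternary: torch.Tensor, alpha: float):
    """Split a ternary weight tensor with values in {-alpha, 0, +alpha}
    into two binary tensors with values in {-alpha/2, +alpha/2} whose
    element-wise sum equals the original ternary tensor.

    Illustrative sketch only -- not the paper's exact splitting rule.
    """
    half = alpha / 2.0
    sign = torch.sign(w_ternary)  # -1, 0, or +1 per element
    # Where the ternary weight is non-zero, both binary halves share its sign;
    # where it is zero, the two halves cancel (+half and -half).
    w1 = torch.where(sign != 0, sign * half, torch.full_like(w_ternary, half))
    w2 = torch.where(sign != 0, sign * half, torch.full_like(w_ternary, -half))
    return w1, w2

# Quick equivalence check on random ternary weights.
alpha = 0.05
w_t = alpha * torch.randint(-1, 2, (4, 4)).float()
w1, w2 = split_ternary_weights(w_t, alpha)
assert torch.allclose(w1 + w2, w_t)
```

After such a splitting, the two binary halves form the wider binary architecture that the abstract describes, which is then fine-tuned to recover or improve accuracy.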
