
Optimal Quantization for Matrix Multiplication

International Symposium on Information Theory (ISIT), 2024
Main: 1 page · 3 figures · Appendix: 45 pages
Abstract

Recent work in the machine learning community has proposed multiple methods for performing lossy compression (quantization) of large matrices. Such quantization is important for accelerating matrix multiplication (a main component of large language models), which is often bottlenecked by the speed of loading these matrices from memory. Unlike classical vector quantization and rate-distortion theory, the goal of these new compression algorithms is to approximate not the matrices themselves, but their matrix product. Specifically, given a pair of real matrices $A, B$, an encoder (compressor) is applied to each of them independently, producing descriptions with $R$ bits per entry. These representations are subsequently used by the decoder to estimate the matrix product $A^\top B$. In this work, we provide a non-asymptotic lower bound on the mean squared error of this approximation (as a function of the rate $R$) for the case of matrices $A, B$ with iid Gaussian entries. Algorithmically, we construct a universal quantizer based on nested lattices with an explicit guarantee of approximation error for any (non-random) pair of matrices $A$, $B$ in terms of only the Frobenius norms $\|\bar{A}\|_F$, $\|\bar{B}\|_F$, and $\|\bar{A}^\top \bar{B}\|_F$, where $\bar{A}, \bar{B}$ are versions of $A, B$ with zero-centered columns, respectively. For iid Gaussian matrices our quantizer achieves the lower bound and is thus asymptotically optimal. A practical low-complexity version of our quantizer achieves performance quite close to optimal. In addition, we derive the rate-distortion function for matrix multiplication of iid Gaussian matrices, which exhibits an interesting phase transition at $R \approx 0.906$ bits per entry.
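To make the problem setup concrete, the following toy sketch quantizes two iid Gaussian matrices independently and estimates their product from the compressed versions. It uses a plain uniform scalar quantizer as a stand-in compressor, not the paper's nested-lattice construction; the rate, matrix size, and clipping range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R = 256, 4                       # matrix size and bits per entry (illustrative)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

def uniform_quantize(X, bits, clip=4.0):
    """Independently map each entry to one of 2**bits cell centers on [-clip, clip]."""
    levels = 2 ** bits
    step = 2 * clip / levels
    idx = np.clip(np.round((X + clip) / step - 0.5), 0, levels - 1)
    return (idx + 0.5) * step - clip  # reconstruction at the cell center

# Encoder sees A and B separately; decoder multiplies the reconstructions.
A_hat = uniform_quantize(A, R)
B_hat = uniform_quantize(B, R)

# Mean squared error of the product estimate, normalized per entry of A^T B and by n,
# mirroring the scaling under which the paper states its bounds.
err = np.mean((A.T @ B - A_hat.T @ B_hat) ** 2) / n
print(f"normalized MSE at R={R} bits/entry: {err:.4f}")
```

This scalar baseline is what the paper's lattice-based quantizer improves upon: as the rate $R$ grows, the error of any such scheme decays, and the paper characterizes the optimal trade-off for Gaussian matrices.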
