LSAQ: Layer-Specific Adaptive Quantization for Large Language Model Deployment

As Large Language Models (LLMs) demonstrate exceptional performance across various domains, deploying them on edge devices has emerged as a new trend. Quantization techniques, which reduce the size and memory footprint of LLMs, are effective for enabling deployment on resource-limited edge devices. However, existing one-size-fits-all quantization methods cannot dynamically adjust the memory footprint of LLMs, limiting their applicability to edge devices with diverse computational resources. To tackle this issue, we propose Layer-Specific Adaptive Quantization (LSAQ), a system for adaptive quantization and dynamic deployment of LLMs based on layer importance. Specifically, LSAQ evaluates the importance of each layer of an LLM by constructing top-k token sets from the layer's inputs and outputs and computing their Jaccard similarity. Based on layer importance, the system adaptively adjusts quantization strategies in real time according to the computational resources of the edge device, applying higher quantization precision to more important layers and lower precision to less important ones. Experimental results show that LSAQ consistently outperforms the selected quantization baselines in terms of perplexity and zero-shot task accuracy. Additionally, it can devise appropriate quantization schemes for different usage scenarios to facilitate the deployment of LLMs.
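The abstract leaves the importance metric and precision assignment at a high level. Below is a minimal, hypothetical Python sketch of one possible reading: per-layer hidden states are projected to the vocabulary with the model's LM-head matrix (an assumed detail), top-k token sets are built from a layer's input and output, their Jaccard similarity is computed, and layers are ranked so that a resource-dependent number of them receive higher-precision quantization. The function names (`top_k_token_set`, `layer_importance`, `assign_bits`) and the sign convention (lower similarity treated as higher importance) are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of LSAQ-style layer scoring and bit allocation.
# Assumptions: hidden states are mapped to the vocabulary via the LM-head
# weight matrix, top-k is taken over mean logits across positions, and lower
# input/output similarity is read as higher layer importance.
import torch


def top_k_token_set(hidden: torch.Tensor, lm_head: torch.Tensor, k: int) -> set:
    """hidden: [seq_len, d_model]; lm_head: [vocab, d_model].
    Returns the ids of the k tokens with the highest mean logit."""
    logits = hidden @ lm_head.T          # [seq_len, vocab]
    scores = logits.mean(dim=0)          # aggregate over positions
    return set(scores.topk(k).indices.tolist())


def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 1.0


def layer_importance(layer_in: torch.Tensor, layer_out: torch.Tensor,
                     lm_head: torch.Tensor, k: int = 100) -> float:
    """One reading of the abstract: a layer that changes the top-k token set
    more (lower Jaccard similarity) is treated as more important."""
    return 1.0 - jaccard(top_k_token_set(layer_in, lm_head, k),
                         top_k_token_set(layer_out, lm_head, k))


def assign_bits(importance: list, n_high: int, high: int = 8, low: int = 4) -> list:
    """Give the n_high most important layers the higher precision; in the real
    system n_high would be derived from the device's free memory."""
    order = sorted(range(len(importance)), key=importance.__getitem__, reverse=True)
    bits = [low] * len(importance)
    for i in order[:n_high]:
        bits[i] = high
    return bits


if __name__ == "__main__":
    torch.manual_seed(0)
    d_model, vocab, seq, n_layers = 64, 1000, 16, 8
    lm_head = torch.randn(vocab, d_model)
    # Stand-in activations; a real run would hook each decoder layer.
    acts = [(torch.randn(seq, d_model), torch.randn(seq, d_model))
            for _ in range(n_layers)]
    scores = [layer_importance(x, y, lm_head) for x, y in acts]
    print(assign_bits(scores, n_high=3))   # e.g. 3 layers at INT8, rest INT4
```

The mapping from available device memory to the number of high-precision layers, and the actual quantization of the selected layers, are details of the paper's system that this sketch deliberately omits.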