Ternarization of Vision Language Models for use on edge devices

Abstract
We propose a process to compress a pre-trained Vision Language Model into a ternary version of itself, rather than training a ternary model from scratch. A new initialization scheme from pre-trained weights, based on the k-means algorithm, is proposed to reduce the ternarization time. We implement several custom operators for executing the ternary model on the TensorFlow Lite engine. We compare the original model with its ternary and binary versions in terms of memory consumption, inference speed, and perplexity. We find that the ternary model using our custom ternary matrix multiplication operator provides a good compromise in terms of memory usage and perplexity, while having the fastest token generation speed.
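The abstract does not spell out the k-means-based initialization or the ternary matrix multiplication; a minimal sketch of how such components are commonly realized might look as follows. The function names `kmeans_ternary_init` and `ternary_matvec` are hypothetical, and the actual scheme in the paper may differ.

```python
import numpy as np

def kmeans_ternary_init(w, iters=20):
    """Hypothetical sketch: initialize ternary weights {-a, 0, +a} from
    pre-trained weights via 1-D k-means (Lloyd iterations) with the three
    centroids constrained to the symmetric set {-a, 0, +a}."""
    w = np.asarray(w, dtype=np.float64).ravel()
    a = np.abs(w).mean()  # initial guess for the scale a
    for _ in range(iters):
        # nearest-centroid assignment in {-a, 0, +a}: nonzero iff |w| > a/2
        mask = np.abs(w) > a / 2
        if not mask.any():
            break
        # centroid update: mean magnitude of weights assigned to +/- a
        a_new = np.abs(w[mask]).mean()
        if np.isclose(a_new, a):
            break
        a = a_new
    t = (np.sign(w) * (np.abs(w) > a / 2)).astype(np.int8)  # codes in {-1,0,1}
    return t, a

def ternary_matvec(t, scale, x):
    """Hypothetical sketch of what a custom ternary matmul kernel computes:
    with codes in {-1, 0, 1}, the product reduces to additions and
    subtractions of inputs, followed by one multiply by the scale."""
    t = np.asarray(t)
    x = np.asarray(x, dtype=np.float64)
    pos = (t == 1) @ x   # sum of inputs where the code is +1
    neg = (t == -1) @ x  # sum of inputs where the code is -1
    return scale * (pos - neg)
```

A production TensorFlow Lite custom operator would additionally pack the 2-bit codes and vectorize the accumulation, but the arithmetic it performs is the same multiplication-free sum sketched above.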
@article{crulis2025_2504.06298,
  title={Ternarization of Vision Language Models for use on edge devices},
  author={Ben Crulis and Cyril De Runz and Barthelemy Serres and Gilles Venturini},
  journal={arXiv preprint arXiv:2504.06298},
  year={2025}
}