An Efficient Matrix Multiplication Algorithm for Accelerating Inference in Binary and Ternary Neural Networks

Abstract

Despite their tremendous success and versatility, Deep Neural Networks (DNNs) such as Large Language Models (LLMs) suffer from inference inefficiency and rely on advanced computational infrastructure. To address these challenges and make these models more accessible and cost-effective, in this paper we propose algorithms that improve the inference time and memory efficiency of DNNs with binary and ternary weight matrices. Focusing on matrix multiplication as the bottleneck operation of inference, we observe that, once trained, the weight matrices of a model no longer change. This allows us to preprocess these matrices and build indices that reduce storage requirements by a logarithmic factor while enabling our efficient inference algorithms. Specifically, for an $n \times n$ weight matrix, our efficient algorithm guarantees a time complexity of $O(\frac{n^2}{\log n})$, a logarithmic-factor improvement over standard vector-matrix multiplication. Beyond the theoretical analysis, we conduct extensive experiments to evaluate the practical efficiency of our algorithms. The results confirm the superiority of our approach in both time and memory, with reductions of up to 29x in multiplication time and up to 6x in memory usage. When applied to LLMs, our experiments show up to a 5.24x speedup in inference time.
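
To make the claimed $O(\frac{n^2}{\log n})$ bound concrete, the sketch below illustrates one classical way such a speedup can arise for binary weight matrices: pack each row's bits into column blocks of width roughly $\log n$ (a one-time preprocessing step, since the trained weights never change), then at inference time tabulate all subset sums of the input entries within each block so every row needs only one table lookup per block. This is a Four-Russians-style blocking illustration written by the editor, not the paper's exact algorithm; function names and the use of NumPy are assumptions, and the paper's actual method additionally covers ternary weights and the storage-index details.

```python
import numpy as np

def preprocess_binary_weights(W, k):
    """One-time preprocessing: pack each row's bits into per-block integer indices.

    W: (n, n) array with entries in {0, 1}; k: column-block width (~ log2 n).
    Returns (idx, n_blocks), where idx[r, b] encodes row r's bit pattern in block b.
    """
    n = W.shape[1]
    n_blocks = (n + k - 1) // k
    idx = np.zeros((W.shape[0], n_blocks), dtype=np.int64)
    for b in range(n_blocks):
        block = W[:, b * k:(b + 1) * k]
        bit_values = 1 << np.arange(block.shape[1])  # bit j of the index <-> column j
        idx[:, b] = block @ bit_values
    return idx, n_blocks

def fast_binary_matvec(idx, n_blocks, x, k):
    """Compute W @ x using the precomputed block indices.

    For each block, all 2^k subset sums of the corresponding x entries are built
    incrementally, then every row fetches its partial sum with a single lookup.
    """
    y = np.zeros(idx.shape[0], dtype=x.dtype)
    for b in range(n_blocks):
        xs = x[b * k:(b + 1) * k]
        m = len(xs)
        table = np.zeros(1 << m, dtype=x.dtype)  # table[s] = sum of xs[j] over set bits j of s
        for j in range(m):
            half = 1 << j
            table[half:2 * half] = table[:half] + xs[j]
        y += table[idx[:, b]]
    return y

# Usage example (hypothetical sizes): with k ~ log2(n), the lookup tables have
# about n entries per block, giving roughly n^2 / log n work per multiplication.
n = 1024
k = int(np.log2(n))
W = np.random.randint(0, 2, size=(n, n))
x = np.random.randn(n)
idx, nb = preprocess_binary_weights(W, k)
assert np.allclose(fast_binary_matvec(idx, nb, x, k), W @ x)
```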

@article{dehghankar2025_2411.06360,
  title={An Efficient Matrix Multiplication Algorithm for Accelerating Inference in Binary and Ternary Neural Networks},
  author={Mohsen Dehghankar and Mahdi Erfanian and Abolfazl Asudeh},
  journal={arXiv preprint arXiv:2411.06360},
  year={2025}
}