
ALLMod: Exploring Area-Efficiency of LUT-based Large Number Modular Reduction via Hybrid Workloads

Abstract

Modular arithmetic, particularly modular reduction, is widely used in cryptographic applications such as homomorphic encryption (HE) and zero-knowledge proofs (ZKP). High-bit-width operations are crucial for enhancing security; however, they are computationally intensive due to the large number of modular operations required. The lookup-table-based (LUT-based) approach, a "space-for-time" technique, reduces computational load by segmenting the input number into smaller bit groups, pre-computing modular reduction results for each segment, and storing these results in LUTs. While effective, this method incurs significant hardware overhead due to extensive LUT usage. In this paper, we introduce ALLMod, a novel approach that improves the area efficiency of LUT-based large-number modular reduction by employing hybrid workloads. Inspired by the iterative method, ALLMod splits the bit groups into two distinct workloads, achieving lower area costs without compromising throughput. We first develop a template to facilitate workload splitting and ensure balanced distribution. We then conduct design space exploration to determine the optimal timing for fusing workload results, identifying the most efficient design under specific constraints. Extensive evaluations show that ALLMod achieves up to 1.65× and 3× improvements in area efficiency over conventional LUT-based methods for bit-widths of 128 and 8,192, respectively.
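The conventional LUT-based scheme that the abstract describes can be sketched as follows. This is an illustrative model of the baseline technique only, not the ALLMod design itself; the segment width `w`, modulus, and function names are assumptions for the example. Each w-bit group at position i contributes (value · 2^(w·i)) mod m, which is precomputed into a per-position table so that reduction becomes a handful of lookups plus one small final reduction.

```python
def build_luts(m: int, n_bits: int, w: int):
    """Precompute LUT[i][v] = (v * 2**(w*i)) % m for every w-bit value v.

    One table per w-bit segment position; this is the 'space' side of the
    space-for-time trade-off (table storage grows with n_bits/w * 2**w).
    """
    groups = (n_bits + w - 1) // w
    return [[(v << (w * i)) % m for v in range(1 << w)] for i in range(groups)]

def lut_mod(x: int, m: int, luts, w: int) -> int:
    """Reduce x mod m via one table lookup per w-bit segment of x."""
    mask = (1 << w) - 1
    total = 0
    for i, table in enumerate(luts):
        total += table[(x >> (w * i)) & mask]  # segment's precomputed residue
    return total % m  # final reduction of a much smaller partial sum

# Example usage (modulus chosen arbitrarily for illustration):
m = (1 << 13) - 1
luts = build_luts(m, n_bits=128, w=8)
x = 0x1234_5678_9ABC_DEF0_1122_3344_5566_7788
assert lut_mod(x, m, luts, w=8) == x % m
```

The hardware cost the paper targets is visible here: the tables hold (n_bits/w) · 2^w entries, so a 8,192-bit input at w = 8 already needs 1,024 tables of 256 residues each, which is what motivates trading some of these lookups for an iterative workload.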

@article{liu2025_2503.15916,
  title={ALLMod: Exploring $\underline{\mathbf{A}}$rea-Efficiency of $\underline{\mathbf{L}}$UT-based $\underline{\mathbf{L}}$arge Number $\underline{\mathbf{Mod}}$ular Reduction via Hybrid Workloads},
  author={Fangxin Liu and Haomin Li and Zongwu Wang and Bo Zhang and Mingzhe Zhang and Shoumeng Yan and Li Jiang and Haibing Guan},
  journal={arXiv preprint arXiv:2503.15916},
  year={2025}
}