
Secure and Efficient L^p-Norm Computation for Two-Party Learning Applications

Main: 11 pages, 4 figures; bibliography: 2 pages
Abstract

Secure norm computation is becoming increasingly important in many real-world learning applications. However, existing cryptographic systems often lack a general framework for securely computing the L^p-norm over private inputs held by different parties. These systems often treat secure norm computation as a black box, neglecting to design tailored cryptographic protocols that optimize performance. Moreover, they focus predominantly on the L^2-norm, paying little attention to other popular L^p-norms, such as L^1 and L^∞, which are widely used in practice, e.g., in machine learning tasks and location-based services. To the best of our knowledge, we propose the first comprehensive framework for secure two-party L^p-norm computation (L^1, L^2, and L^∞), denoted Crypto-L^p, designed to be versatile across various applications. We have designed, implemented, and thoroughly evaluated our framework across a wide range of benchmarking applications, state-of-the-art (SOTA) cryptographic protocols, and real-world datasets to validate its effectiveness and practical applicability. In summary, Crypto-L^p outperforms prior works on secure L^p-norm computation, achieving 82×, 271×, and 42× improvements in runtime while reducing communication overhead by 36×, 4×, and 21× for p = 1, 2, and ∞, respectively. Furthermore, we take the first step in adapting our Crypto-L^p framework to secure machine learning inference, reducing communication costs by 3× compared with SOTA systems while maintaining comparable runtime and accuracy.
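For readers less familiar with the three norms the framework targets, a minimal plaintext sketch is shown below. This illustrates only the norm definitions themselves, not the paper's secure two-party protocols, which compute these quantities over private (e.g., secret-shared) inputs; the function name `lp_norm` is ours, not from the paper.

```python
def lp_norm(x, p):
    """Plaintext L^p norm of a vector x, for p = 1, 2, or infinity.

    This is only a reference for the norm definitions; Crypto-L^p
    computes these securely over inputs held by two different parties.
    """
    if p == float("inf"):
        # L^inf norm: largest absolute entry.
        return max(abs(v) for v in x)
    # L^p norm: (sum of |x_i|^p)^(1/p); p = 1 and p = 2 are the common cases.
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

x = [3.0, -4.0]
print(lp_norm(x, 1))             # 7.0
print(lp_norm(x, 2))             # 5.0
print(lp_norm(x, float("inf")))  # 4.0
```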
