MPC-enabled Privacy-Preserving Neural Network Training against Malicious Attack

24 July 2020
Ziyao Liu
Ivan Tjuawinata
C. Xing
K. Lam
Abstract

In the past decades, the application of secure multiparty computation (MPC) to machine learning, especially privacy-preserving neural network training, has attracted tremendous attention from both academia and industry. MPC enables several data owners to jointly train a neural network while preserving their data privacy. However, most previous works focus on the semi-honest threat model, which cannot withstand fraudulent messages sent by malicious participants. In this work, we propose a construction of efficient $n$-party protocols for secure neural network training that preserves the privacy of all honest participants even when a majority of the parties are malicious. Compared to other designs that provide only semi-honest security in the dishonest-majority setting, our actively secure neural network training incurs affordable efficiency overheads. In addition, we propose a scheme that allows additive shares defined over an integer ring $\mathbb{Z}_N$ to be securely converted to additive shares over a finite field $\mathbb{Z}_Q$. This conversion scheme is essential for correctly converting shared Beaver triples, so that values generated in the preprocessing phase remain usable in the online phase, and it may be of independent interest.
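As background for the abstract's mention of additive shares and Beaver triples, the following is a minimal sketch of additive secret sharing over a ring $\mathbb{Z}_N$ and Beaver-triple multiplication. The modulus, party count, and function names are illustrative assumptions; this is not the paper's protocol and it offers no malicious (active) security on its own:

```python
import secrets

N = 2**16  # hypothetical small ring modulus Z_N, for illustration only


def share(x, n_parties, modulus):
    """Split x into n_parties additive shares modulo `modulus`."""
    shares = [secrets.randbelow(modulus) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % modulus)  # last share makes the sum equal x
    return shares


def reconstruct(shares, modulus):
    """Recombine additive shares: the value is just their sum mod the modulus."""
    return sum(shares) % modulus


def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh, modulus):
    """Multiply secret-shared x and y using a preprocessed triple (a, b, c = a*b).

    Parties open d = x - a and e = y - b (both mask the secrets), then each
    locally computes a share of x*y = c + d*b + e*a + d*e.
    """
    d = reconstruct([(x - a) % modulus for x, a in zip(x_sh, a_sh)], modulus)
    e = reconstruct([(y - b) % modulus for y, b in zip(y_sh, b_sh)], modulus)
    z_sh = [(c + d * b + e * a) % modulus
            for a, b, c in zip(a_sh, b_sh, c_sh)]
    # the public constant d*e is added by exactly one party
    z_sh[0] = (z_sh[0] + d * e) % modulus
    return z_sh
```

A quick usage check: sharing x = 7 and y = 9 among three parties, generating a random triple, and running `beaver_mul` reconstructs 63 mod N. The share-conversion problem the abstract raises arises because these sums only cancel modulo the ring size, so shares over $\mathbb{Z}_N$ cannot simply be reinterpreted modulo a different prime $Q$ without a dedicated conversion step.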
