Secure Bilevel Asynchronous Vertical Federated Learning with Backward Updating

1 March 2021
Qingsong Zhang
Bin Gu
Cheng Deng
Heng Huang
    FedML
arXiv: 2103.00958 (abs · PDF · HTML)
Abstract

Vertical federated learning (VFL) attracts increasing attention due to the emerging demand for multi-party collaborative modeling and concerns over privacy leakage. In real-world VFL applications, usually only one or a few parties hold labels, which makes it challenging for all parties to collaboratively learn the model without privacy leakage. Meanwhile, most existing VFL algorithms are restricted to synchronous computation, which leads to inefficiency in practice. To address these challenges, we propose a novel VFL framework integrated with a new backward updating mechanism and a bilevel asynchronous parallel architecture (VFB²), under which three new algorithms, VFB²-SGD, VFB²-SVRG, and VFB²-SAGA, are proposed. We derive theoretical convergence rates for these three algorithms under both strongly convex and nonconvex conditions. We also prove the security of VFB² under semi-honest threat models. Extensive experiments on benchmark datasets demonstrate that our algorithms are efficient, scalable, and lossless.
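The minimal sketch below (not the paper's VFB² implementation) illustrates the vertical-FL setting the abstract assumes: each party holds a disjoint feature block of the same samples, only the label-holding party can compute the loss, and a per-sample gradient signal is sent back so every party can update its own model block, which is the role the backward updating mechanism plays for the label-free parties. The Party class, the vfl_sgd_step function, the plain logistic-regression model, the random toy data, and the synchronous single-machine simulation are all assumptions made for illustration; the paper's bilevel asynchronous parallelism and its privacy protections are omitted.

```python
# Illustrative sketch of vertical federated learning (VFL), NOT the authors' VFB^2 code.
# Assumed names: Party, vfl_sgd_step. Model: plain logistic regression on toy data.
import numpy as np

rng = np.random.default_rng(0)

class Party:
    """Holds a private feature block X_k and the corresponding weight block w_k."""
    def __init__(self, features):
        self.X = features                        # (n_samples, d_k), never shared
        self.w = np.zeros(features.shape[1])     # local block of the joint model

    def partial_logits(self, idx):
        # Each party contributes only the inner products <x_i, w_k>, not raw features.
        return self.X[idx] @ self.w

    def backward_update(self, idx, residual, lr):
        # The label-holding party sends the per-sample residual dL/dlogit;
        # each party updates its own block using its private features.
        grad = self.X[idx].T @ residual / len(idx)
        self.w -= lr * grad

def vfl_sgd_step(parties, labels, idx, lr=0.5):
    # Joint prediction is the sum of the parties' partial logits.
    logits = sum(p.partial_logits(idx) for p in parties)
    preds = 1.0 / (1.0 + np.exp(-logits))        # sigmoid
    residual = preds - labels[idx]               # dL/dlogit for the logistic loss
    for p in parties:
        p.backward_update(idx, residual, lr)

# Toy data: 200 samples, 6 features split across 3 parties; labels held by one party.
n, d = 200, 6
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)
parties = [Party(X[:, 0:2]), Party(X[:, 2:4]), Party(X[:, 4:6])]

for step in range(500):
    batch = rng.choice(n, size=32, replace=False)
    vfl_sgd_step(parties, y, batch)

logits = sum(p.partial_logits(np.arange(n)) for p in parties)
print(f"training accuracy after 500 SGD steps: {((logits > 0) == y).mean():.3f}")
```

In this simplified picture the parties update in lockstep each step; the paper's contribution is to let them compute and apply such updates asynchronously (with SGD, SVRG, or SAGA style gradients) while keeping the exchanged quantities secure under a semi-honest threat model.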
