De-VertiFL: A Solution for Decentralized Vertical Federated Learning

8 October 2024
Alberto Huertas Celdrán
Chao Feng
Sabyasachi Banik
Gérome Bovet
Gregorio Martínez Pérez
Burkhard Stiller
Abstract

Federated Learning (FL), introduced in 2016, was designed to enhance data privacy in collaborative model training environments. Within the FL paradigm, horizontal FL, where clients share the same set of features but hold different data samples, has been extensively studied in both centralized and decentralized settings. In contrast, Vertical Federated Learning (VFL), which is crucial in real-world decentralized scenarios where clients possess different, yet sensitive, data about the same entity, remains underexplored. This work therefore introduces De-VertiFL, a novel solution for training models in a decentralized VFL setting. De-VertiFL contributes a new distribution of the network architecture, an innovative knowledge exchange scheme, and a distributed federated training process. Specifically, De-VertiFL enables the sharing of hidden layer outputs among federation clients, allowing participants to benefit from each other's intermediate computations and thereby improving learning efficiency. De-VertiFL has been evaluated on a variety of well-known datasets, covering both image and tabular data and both binary and multiclass classification tasks. The results show that De-VertiFL generally surpasses state-of-the-art methods in F1-score while maintaining a decentralized and privacy-preserving framework.
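A minimal sketch of the hidden-output-sharing idea described in the abstract, written in PyTorch. This is an illustrative reconstruction, not the authors' implementation: the network sizes, the synthetic vertically partitioned data, and the detach-based gradient isolation between clients are assumptions made for the example. Each client holds a disjoint feature slice of the same samples, computes its local hidden-layer output, shares it with its peers, and trains its own encoder and classification head on the concatenation of all shared outputs.

import torch
import torch.nn as nn

torch.manual_seed(0)

NUM_CLIENTS = 3          # federation participants (assumed value)
FEATURES_PER_CLIENT = 4  # size of each client's private feature slice
HIDDEN_DIM = 8           # width of the shared hidden-layer output
NUM_CLASSES = 2
NUM_SAMPLES = 256

# Synthetic vertically partitioned data: same samples, different features per client.
full_x = torch.randn(NUM_SAMPLES, NUM_CLIENTS * FEATURES_PER_CLIENT)
labels = (full_x.sum(dim=1) > 0).long()
client_x = torch.chunk(full_x, NUM_CLIENTS, dim=1)

class Client(nn.Module):
    """One participant: a local encoder plus a local classification head."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(FEATURES_PER_CLIENT, HIDDEN_DIM), nn.ReLU())
        # The head consumes the concatenated hidden outputs of all clients.
        self.head = nn.Linear(NUM_CLIENTS * HIDDEN_DIM, NUM_CLASSES)

clients = [Client() for _ in range(NUM_CLIENTS)]
optimizers = [torch.optim.SGD(c.parameters(), lr=0.1) for c in clients]
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    # 1) Every client computes and broadcasts its hidden-layer output.
    #    Peers receive these as plain tensors, detached from the sender's graph.
    broadcast = [c.encoder(x).detach() for c, x in zip(clients, client_x)]

    # 2) Each client trains locally: it recomputes its own hidden output so its
    #    encoder receives gradients, and uses the received outputs for the rest.
    for i, (client, opt) in enumerate(zip(clients, optimizers)):
        parts = list(broadcast)
        parts[i] = client.encoder(client_x[i])
        logits = client.head(torch.cat(parts, dim=1))
        loss = loss_fn(logits, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Quick sanity check from client 0's point of view.
with torch.no_grad():
    shared = torch.cat([c.encoder(x) for c, x in zip(clients, client_x)], dim=1)
    acc = (clients[0].head(shared).argmax(dim=1) == labels).float().mean()
print(f"client 0 training accuracy on the shared representation: {acc.item():.2f}")

In this sketch only hidden-layer activations cross client boundaries; raw features and labels never leave their owners, which is the privacy-preserving property the abstract emphasizes.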

@article{celdrán2025_2410.06127,
  title={De-VertiFL: A Solution for Decentralized Vertical Federated Learning},
  author={Alberto Huertas Celdrán and Chao Feng and Sabyasachi Banik and Gerome Bovet and Gregorio Martinez Perez and Burkhard Stiller},
  journal={arXiv preprint arXiv:2410.06127},
  year={2025}
}