Approximate Agreement Algorithms for Byzantine Collaborative Learning

ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2025
Main: 17 pages
5 figures
Bibliography: 3 pages
Appendix: 1 page
Abstract

In Byzantine collaborative learning, n clients in a peer-to-peer network collectively learn a model without sharing their data by exchanging and aggregating stochastic gradient estimates. Byzantine clients can prevent honest clients from collecting identical sets of gradient estimates. The aggregation step therefore needs to be combined with an efficient (approximate) agreement subroutine to ensure convergence of the training process.
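To make the aggregation step concrete, here is a minimal sketch of one standard Byzantine-robust aggregation rule, the coordinate-wise trimmed mean, which tolerates up to f Byzantine clients among n > 2f. This is an illustrative example of robust aggregation in general, not necessarily the specific algorithm proposed in the paper; the function name and parameters are hypothetical.

```python
import numpy as np

def trimmed_mean(gradients, f):
    """Coordinate-wise trimmed mean over n gradient estimates.

    For each coordinate, drop the f smallest and f largest values,
    then average the remaining n - 2f. Illustrative robust-aggregation
    sketch; not the paper's specific agreement subroutine.
    """
    g = np.sort(np.asarray(gradients, dtype=float), axis=0)  # sort per coordinate
    n = g.shape[0]
    assert n > 2 * f, "need more than 2f clients to tolerate f Byzantine ones"
    return g[f:n - f].mean(axis=0)

# Example: 5 clients, one Byzantine client sending an extreme outlier.
grads = [[1.0, 2.0],
         [1.1, 2.1],
         [0.9, 1.9],
         [1.0, 2.0],
         [100.0, -100.0]]  # Byzantine gradient
print(trimmed_mean(grads, f=1))  # outlier is trimmed away in each coordinate
```

With f = 1, the extreme values 100.0 and -100.0 are discarded per coordinate, so the aggregate stays close to the honest clients' gradients. Note that trimming alone does not give all honest clients the same aggregate when they receive different gradient sets, which is why an approximate agreement subroutine is needed on top.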
