We propose SwiftAgg+, a novel secure aggregation protocol for federated learning systems, in which a central server aggregates the local models of N distributed users, each of size L symbols, trained on their local data, in a privacy-preserving manner. SwiftAgg+ significantly reduces the communication overhead without any compromise on security, and achieves the optimal communication loads within diminishing gaps. Specifically, in the presence of at most D dropout users, SwiftAgg+ achieves a per-user communication load of (1 + O(1/N))L symbols and a server communication load of (1 + O(1/N))L symbols, with a worst-case information-theoretic security guarantee against any subset of up to T semi-honest users who may also collude with the curious server. Moreover, SwiftAgg+ allows a flexible trade-off between the communication loads and the number of active communication links. In particular, for T < N - D and for any K ∈ ℕ, SwiftAgg+ can achieve a server communication load of (1 + T/K)L symbols and a per-user communication load of up to (1 - 1/N)(1 + (T + D)/K)L symbols, where the number of pair-wise active connections in the network is (N/2)(K + T + D + 1).
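To make the stated trade-off concrete, the following is a minimal sketch (illustrative only, not part of the protocol implementation) that evaluates the communication loads and link count as functions of the trade-off parameter K; the parameter values N, L, T, D chosen below are assumptions for illustration:

```python
# Hypothetical sketch: evaluate the SwiftAgg+ communication-load trade-off.
# Formulas follow the abstract above; all concrete parameter values are
# illustrative assumptions, not figures from the paper.

def server_load(L, T, K):
    # Server communication load: (1 + T/K) * L symbols.
    return (1 + T / K) * L

def per_user_load(N, L, T, D, K):
    # Worst-case per-user communication load:
    # (1 - 1/N) * (1 + (T + D)/K) * L symbols.
    return (1 - 1 / N) * (1 + (T + D) / K) * L

def active_links(N, T, D, K):
    # Number of pair-wise active connections: (N/2) * (K + T + D + 1).
    return (N / 2) * (K + T + D + 1)

if __name__ == "__main__":
    N, L, T, D = 100, 10**6, 10, 5  # illustrative network parameters
    for K in (1, 5, 20):            # larger K: lower loads, more active links
        print(f"K={K}: server={server_load(L, T, K):.0f}, "
              f"per-user={per_user_load(N, L, T, D, K):.0f}, "
              f"links={active_links(N, T, D, K):.0f}")
```

Increasing K drives both loads toward L symbols (the size of a single model) at the cost of more pair-wise connections, which is the flexibility the abstract describes.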