Trustworthiness of Stochastic Gradient Descent in Distributed Learning

European Control Conference (ECC), 2024
Main: 10 pages
1 figure
Bibliography: 1 page
Appendix: 2 pages
Abstract

Distributed learning (DL) uses multiple nodes to accelerate training, enabling efficient optimization of large-scale models. Stochastic Gradient Descent (SGD), a key optimization algorithm, plays a central role in this process. However, communication bottlenecks often limit scalability and efficiency, leading to the increasing adoption of compressed SGD techniques to alleviate these challenges. Although compressed SGD addresses communication overheads, it introduces trustworthiness concerns, as gradient exchanges among nodes are vulnerable to attacks such as gradient inversion (GradInv) and membership inference (MIA). The trustworthiness of compressed SGD remains unexplored, leaving important questions about its reliability unanswered.
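To make the setting concrete, the sketch below illustrates one common compressed-SGD scheme, top-k gradient sparsification, in which each node transmits only the k largest-magnitude gradient entries. This is an illustrative assumption: the abstract does not commit to a specific compressor, and the function names here (`top_k_sparsify`, `sgd_step`) are hypothetical.

```python
def top_k_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a gradient vector.

    Illustrative top-k compressor; returns a sparse mapping index -> value,
    which is what a node would transmit instead of the full gradient.
    """
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return {i: grad[i] for i in idx}

def sgd_step(params, sparse_grad, lr=0.1):
    """Apply one SGD update using a received sparse (compressed) gradient."""
    new_params = list(params)
    for i, g in sparse_grad.items():
        new_params[i] -= lr * g
    return new_params

# A node compresses its local gradient before communicating it.
grad = [0.02, -1.5, 0.3, 0.9, -0.05]
sparse = top_k_sparsify(grad, k=2)      # only the two largest entries survive
params = sgd_step([0.0] * 5, sparse)    # server/peer applies the sparse update
```

Even in compressed form, the transmitted entries still leak information about the local data, which is what attacks such as GradInv and MIA exploit.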
