Sketching for First Order Method: Efficient Algorithm for Low-Bandwidth Channel and Vulnerability

15 October 2022
Zhao Song
Yitan Wang
Zheng Yu
Lichen Zhang
    FedML
Abstract

Sketching is one of the most fundamental tools in large-scale machine learning. It enables runtime and memory savings by randomly compressing the original large problem into lower dimensions. In this paper, we propose a novel sketching scheme for first-order methods in the large-scale distributed learning setting, such that the communication costs between distributed agents are reduced while the convergence of the algorithms is still guaranteed. Given gradient information in a high dimension $d$, the agent passes the compressed information processed by a sketching matrix $R \in \mathbb{R}^{s \times d}$ with $s \ll d$, and the receiver de-compresses via the de-sketching matrix $R^\top$ to ``recover'' the information in the original dimension. Using such a framework, we develop algorithms for federated learning with lower communication costs. However, such random sketching does not protect the privacy of local data directly. We show that the gradient leakage problem still exists after applying the sketching technique by presenting a specific gradient attack method. As a remedy, we rigorously prove that the algorithm becomes differentially private once additional random noise is added to the gradient information, yielding a first-order approach for federated learning tasks that is both communication-efficient and differentially private. Our sketching scheme can be further generalized to other learning settings and may be of independent interest.
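
As a rough illustration of the mechanism described above, the following numerical sketch runs one compress/de-sketch round trip and the noisy (differentially private) variant. It assumes a Gaussian sketching matrix and placeholder values for $d$, $s$, and the noise scale $\sigma$; the paper's actual constructions and privacy calibration may differ.

```python
# A minimal sketch of the sketch/de-sketch round trip from the abstract.
# Assumptions (not from the paper): the Gaussian sketching matrix, the
# dimensions d and s, and the noise scale sigma are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

d, s = 10_000, 200                  # ambient and sketch dimensions, s << d
g = rng.normal(size=d)              # a local gradient held by one agent

# Sketching matrix R in R^{s x d}. With i.i.d. N(0, 1/s) entries,
# E[R^T R] = I_d, so de-sketching is unbiased.
R = rng.normal(size=(s, d)) / np.sqrt(s)

sketch = R @ g                      # agent transmits s numbers instead of d

g_hat = R.T @ sketch                # receiver "recovers" a d-dim estimate

# A single round trip is unbiased but noisy; the error decreases as the
# sketch dimension s grows.
print("relative error:", np.linalg.norm(g_hat - g) / np.linalg.norm(g))

# Sketching alone is not private: the abstract notes a gradient attack
# still applies. Adding Gaussian noise to the gradient before sketching
# (sigma here is a placeholder; the paper calibrates it for differential
# privacy) gives the communication-efficient, differentially private variant.
sigma = 0.1
private_sketch = R @ (g + rng.normal(scale=sigma, size=d))
```

Note the $1/\sqrt{s}$ scaling: it makes $R^\top R$ equal the identity in expectation, which is what lets the de-sketched gradient serve as an unbiased estimate inside a first-order method.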
