ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2206.05891 · Cited By
Anchor Sampling for Federated Learning with Partial Client Participation

13 June 2022
Feijie Wu, Song Guo, Zhihao Qu, Shiqi He, Ziming Liu, Jing Gao
FedML

Papers citing "Anchor Sampling for Federated Learning with Partial Client Participation"

11 papers shown
FedFetch: Faster Federated Learning with Adaptive Downstream Prefetching
Qifan Yan, Andrew Liu, Shiqi He, Mathias Lécuyer, Ivan Beschastnikh
FedML · 21 Apr 2025
Scalable Decentralized Learning with Teleportation
Yuki Takezawa, Sebastian U. Stich
25 Jan 2025
Hierarchical Federated Learning with Multi-Timescale Gradient Correction
Wenzhi Fang, Dong-Jun Han, Evan Chen, Shiqiang Wang, Christopher G. Brinton
27 Sep 2024
Byzantine-resilient Federated Learning Employing Normalized Gradients on Non-IID Datasets
Shiyuan Zuo, Xingrun Yan, Rongfei Fan, Li Shen, Puning Zhao, Jie Xu, Han Hu
FedML · 18 Aug 2024
FIARSE: Model-Heterogeneous Federated Learning via Importance-Aware Submodel Extraction
Feijie Wu, Xingchen Wang, Yaqing Wang, Tianci Liu, Lu Su, Jing Gao
FedML · 28 Jul 2024
FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model
Feijie Wu, Zitao Li, Yaliang Li, Bolin Ding, Jing Gao
25 Jun 2024
DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization
A. Tyurin, Peter Richtárik
02 Feb 2022
Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani
FedML · 14 Feb 2021
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan
13 Feb 2020
Stochastic Nonconvex Optimization with Large Minibatches
Weiran Wang, Nathan Srebro
25 Sep 2017
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt
16 Aug 2016