ResearchTrend.AI
Federated Random Reshuffling with Compression and Variance Reduction
Grigory Malinovsky, Peter Richtárik
8 May 2022 · arXiv:2205.03914 · FedML

Papers citing "Federated Random Reshuffling with Compression and Variance Reduction"

10 papers shown
Randomized Asymmetric Chain of LoRA: The First Meaningful Theoretical Framework for Low-Rank Adaptation
Grigory Malinovsky, Umberto Michieli, Hasan Hammoud, Taha Ceritli, Hayder Elesedy, Mete Ozay, Peter Richtárik
AI4CE · 10 Oct 2024
On the Last-Iterate Convergence of Shuffling Gradient Methods
Zijian Liu, Zhengyuan Zhou
12 Mar 2024
Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction
Yury Demidovich, Grigory Malinovsky, Peter Richtárik
11 Mar 2024
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression
Laurent Condat, A. Maranjyan, Peter Richtárik
07 Mar 2024
Towards a Better Theoretical Understanding of Independent Subnetwork Training
Egor Shulgin, Peter Richtárik
AI4CE · 28 Jun 2023
Improving Accelerated Federated Learning with Compression and Importance Sampling
Michał Grudzień, Grigory Malinovsky, Peter Richtárik
FedML · 05 Jun 2023
TAMUNA: Doubly Accelerated Distributed Optimization with Local Training, Compression, and Partial Participation
Laurent Condat, Ivan Agarský, Grigory Malinovsky, Peter Richtárik
FedML · 20 Feb 2023
Federated Learning with Regularized Client Participation
Grigory Malinovsky, Samuel Horváth, Konstantin Burlachenko, Peter Richtárik
FedML · 07 Feb 2023
Federated Optimization Algorithms with Random Reshuffling and Gradient Compression
Abdurakhmon Sadiev, Grigory Malinovsky, Eduard A. Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik
FedML · 14 Jun 2022
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Aryan Mokhtari, Mert Gurbuzbalaban, Alejandro Ribeiro
01 Nov 2016