ResearchTrend.AI

WASH: Train your Ensemble with Communication-Efficient Weight Shuffling, then Average

27 May 2024
Louis Fournier, Adel Nabli, Masih Aminbeidokhti, M. Pedersoli, Eugene Belilovsky, Edouard Oyallon
Topics: MoMe, FedML
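The paper's title describes its method: each ensemble member trains independently, a small set of weights is shuffled across members as a lightweight form of communication, and the members are averaged into one model at the end. As a rough illustration only — the function name, hyperparameters, and toy loss below are hypothetical, not taken from the paper — a minimal sketch of that pattern might look like:

```python
import numpy as np

def wash_style_sketch(num_workers=4, dim=8, steps=100,
                      shuffle_frac=0.25, lr=0.1, seed=0):
    """Hypothetical sketch of the titled scheme (details assumed, not from
    the paper): local training per worker, periodic shuffling of a small
    random subset of coordinates across workers, then a final average."""
    rng = np.random.default_rng(seed)
    weights = [rng.normal(size=dim) for _ in range(num_workers)]
    for _ in range(steps):
        # Each worker takes a local gradient step on a toy quadratic
        # loss f(w) = ||w||^2 / 2, whose gradient is simply w.
        for i in range(num_workers):
            weights[i] = weights[i] - lr * weights[i]
        # Communication-efficient step: instead of averaging all weights,
        # permute a small random subset of coordinates across workers.
        k = max(1, int(shuffle_frac * dim))
        coords = rng.choice(dim, size=k, replace=False)
        for c in coords:
            perm = rng.permutation(num_workers)
            vals = [weights[i][c] for i in range(num_workers)]
            for i in range(num_workers):
                weights[i][c] = vals[perm[i]]
    # Final step: average the ensemble into a single model.
    return np.mean(weights, axis=0)

avg = wash_style_sketch()
print(avg.shape)  # (8,)
```

Shuffling moves only `k` scalars per worker per round, versus a full all-reduce of `dim` values for averaging every step — that trade-off is the "communication-efficient" part the title refers to.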

Papers citing "WASH: Train your Ensemble with Communication-Efficient Weight Shuffling, then Average"

4 / 4 papers shown
Communication Efficient LLM Pre-training with SparseLoCo
Amir Sarfi, Benjamin Thérien, Joel Lidin, Eugene Belilovsky
21 Aug 2025
Communication-Efficient Distributed Training for Collaborative Flat Optima Recovery in Deep Learning
Tolga Dimlioglu, A. Choromańska
Topics: FedML
27 Jul 2025
Model Parallelism With Subnetwork Data Parallelism
Vaibhav Singh, Zafir Khalid, Edouard Oyallon, Eugene Belilovsky
11 Jul 2025
Communication-Efficient Language Model Training Scales Reliably and Robustly: Scaling Laws for DiLoCo
Zachary B. Charles, Gabriel Teston, Lucio Dery, Keith Rush, Nova Fallen, Zachary Garrett, Arthur Szlam, Arthur Douillard
12 Mar 2025