ResearchTrend.AI
WASH: Train your Ensemble with Communication-Efficient Weight Shuffling, then Average
arXiv:2405.17517 · 27 May 2024
Louis Fournier, Adel Nabli, Masih Aminbeidokhti, M. Pedersoli, Eugene Belilovsky, Edouard Oyallon
Topics: MoMe, FedML

Papers citing "WASH: Train your Ensemble with Communication-Efficient Weight Shuffling, then Average"

4 citing papers:
Diverse Weight Averaging for Out-of-Distribution Generalization
19 May 2022 · Alexandre Ramé, Matthieu Kirchmeyer, Thibaud Rahier, A. Rakotomamonjy, Patrick Gallinari, Matthieu Cord
Topics: OOD
Federated Dropout -- A Simple Approach for Enabling Federated Learning on Resource Constrained Devices
30 Sep 2021 · Dingzhu Wen, Ki-Jun Jeon, Kaibin Huang
Topics: FedML
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
05 Dec 2016 · Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell
Topics: UQCV, BDL
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
06 Jun 2015 · Y. Gal, Zoubin Ghahramani
Topics: UQCV, BDL