On the Optimal Batch Size for Byzantine-Robust Distributed Learning

23 May 2023
Yi-Rui Yang, Chang-Wei Shi, Wu-Jun Li
FedML, AAML

Papers citing "On the Optimal Batch Size for Byzantine-Robust Distributed Learning"

2 papers shown:
Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums
Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan
FedML | 24 May 2022
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL | 15 Sep 2016