Flag Aggregator: Scalable Distributed Training under Failures and Augmented Losses using Convex Optimization

arXiv: 2302.05865 (v2, latest)
International Conference on Learning Representations (ICLR), 2023
12 February 2023
Authors: Hamidreza Almasi, Harshit Mishra, Balajee Vamanan, Sathya Ravi
Topic: FedML
Links: arXiv (abs), PDF, HTML, GitHub (1★)

Papers citing "Flag Aggregator: Scalable Distributed Training under Failures and Augmented Losses using Convex Optimization"

1 / 1 papers shown
Byzantine Fault-Tolerant Distributed Machine Learning Using Stochastic Gradient Descent (SGD) and Norm-Based Comparative Gradient Elimination (CGE)
Authors: Nirupam Gupta, Shuo Liu, Nitin H. Vaidya
Topic: FedML
11 Aug 2020