Quantization Avoids Saddle Points in Distributed Optimization
arXiv:2403.10423, 15 March 2024
Yanan Bo, Yongqiang Wang

Papers citing "Quantization Avoids Saddle Points in Distributed Optimization"

2 papers
Escaping Saddle Points via Curvature-Calibrated Perturbations: A Complete Analysis with Explicit Constants and Empirical Validation
Faruk Alpay, Hamdi Alakkad
22 Aug 2025
Locally Differentially Private Gradient Tracking for Distributed Online Learning over Directed Graphs
IEEE Transactions on Automatic Control (TAC), 2023
Ziqin Chen, Yongqiang Wang
24 Oct 2023