Distributed Gradient Flow: Nonsmoothness, Nonconvexity, and Saddle Point Evasion

12 August 2020
Brian Swenson, Ryan W. Murray, H. Vincent Poor, S. Kar
arXiv: 2008.05387

Papers citing "Distributed Gradient Flow: Nonsmoothness, Nonconvexity, and Saddle Point Evasion"

4 / 4 papers shown

Title | Authors | Date
Peer-to-Peer Learning Dynamics of Wide Neural Networks | Shreyas Chaudhari, Srinivasa Pranav, Emile Anand, José M. F. Moura | 23 Sep 2024
Networked Signal and Information Processing | Stefan Vlaski, S. Kar, Ali H. Sayed, José M. F. Moura | 25 Oct 2022
Understanding A Class of Decentralized and Federated Optimization Algorithms: A Multi-Rate Feedback Control Perspective [FedML] | Xinwei Zhang, Mingyi Hong, N. Elia | 27 Apr 2022
Decentralized Frank-Wolfe Algorithm for Convex and Non-convex Problems | Hoi-To Wai, Jean Lafond, Anna Scaglione, Eric Moulines | 05 Dec 2016