arXiv:2302.03542
Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy

7 February 2023
Blake E. Woodworth, Konstantin Mishchenko, Francis R. Bach

Papers citing "Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy"

8 papers shown
Accelerated Methods with Compressed Communications for Distributed Optimization Problems under Data Similarity
Dmitry Bylinkin, Aleksandr Beznosikov
21 Dec 2024

SPAM: Stochastic Proximal Point Method with Momentum Variance Reduction for Non-convex Cross-Device Federated Learning [FedML]
Avetik G. Karagulyan, Egor Shulgin, Abdurakhmon Sadiev, Peter Richtárik
30 May 2024

Non-Convex Stochastic Composite Optimization with Polyak Momentum
Yuan Gao, Anton Rodomanov, Sebastian U. Stich
05 Mar 2024

Unified Convergence Theory of Stochastic and Variance-Reduced Cubic Newton Methods [ODL]
El Mahdi Chayti, N. Doikov, Martin Jaggi
23 Feb 2023

Target-based Surrogates for Stochastic Optimization
J. Lavington, Sharan Vaswani, Reza Babanezhad, Mark W. Schmidt, Nicolas Le Roux
06 Feb 2023

Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity
D. Kovalev, Aleksandr Beznosikov, Ekaterina Borodich, Alexander Gasnikov, G. Scutari
30 May 2022

Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Julien Mairal
18 Feb 2014

Optimal Distributed Online Prediction using Mini-Batches
O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
07 Dec 2010