ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Double Variance Reduction: A Smoothing Trick for Composite Optimization Problems without First-Order Gradient

28 May 2024
Hao Di
Haishan Ye
Yueling Zhang
Xiangyu Chang
Guang Dai
Ivor W. Tsang

Papers citing "Double Variance Reduction: A Smoothing Trick for Composite Optimization Problems without First-Order Gradient"

2 / 2 papers shown
Poor Man's Training on MCUs: A Memory-Efficient Quantized Back-Propagation-Free Approach
Yequan Zhao, Hai Li, Ian Young, Zheng-Wei Zhang
07 Nov 2024
A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
19 Mar 2014