ResearchTrend.AI

Proximal Online Gradient is Optimum for Dynamic Regret

Yawei Zhao, Shuang Qiu, Ji Liu
8 October 2018 · arXiv:1810.03594
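
For reference, the method named in the title is proximal online gradient descent. Below is a minimal, generic sketch of one such update in Python; the step size, the l1 regularizer, and the toy drifting quadratic losses are illustrative assumptions, not details taken from the paper.

    import numpy as np

    # Generic proximal online gradient update:
    #   x_{t+1} = prox_{eta * r}(x_t - eta * grad f_t(x_t))
    # Here r(x) = lam * ||x||_1, whose prox is soft-thresholding (assumed for illustration).

    def soft_threshold(v, thresh):
        # Proximal operator of lam * ||x||_1 evaluated with thresh = eta * lam.
        return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

    def proximal_online_gradient_step(x, grad, eta, lam):
        # Gradient step on the smooth loss, then prox step on the regularizer.
        return soft_threshold(x - eta * grad, eta * lam)

    rng = np.random.default_rng(0)
    x, eta, lam = np.zeros(5), 0.1, 0.01
    for t in range(200):
        # Toy non-stationary losses f_t(x) = 0.5 * ||x - z_t||^2 with a slowly drifting target z_t.
        z_t = np.full(5, np.sin(0.05 * t)) + 0.01 * rng.standard_normal(5)
        grad = x - z_t  # gradient of f_t at x_t
        x = proximal_online_gradient_step(x, grad, eta, lam)
    print(x)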

Papers citing "Proximal Online Gradient is Optimum for Dynamic Regret"

3 / 3 papers shown
Learning Large DAGs by Combining Continuous Optimization and Feedback Arc Set Heuristics
P. Gillot, P. Parviainen
CML, BDL
01 Jul 2021

Dynamic Regret of Policy Optimization in Non-stationary Environments
Yingjie Fei, Zhuoran Yang, Zhaoran Wang, Qiaomin Xie
30 Jun 2020

Decentralized Online Learning: Take Benefits from Others' Data without Sharing Your Own to Track Global Trend
Wendi Wu, Zongren Li, Yawei Zhao, Chenkai Yu, P. Zhao, Ji Liu
FedML
29 Jan 2019