Finding the Near Optimal Policy via Adaptive Reduced Regularization in MDPs

31 October 2020
Wenhao Yang
Xiang Li
Guangzeng Xie
Zhihua Zhang

Papers citing "Finding the Near Optimal Policy via Adaptive Reduced Regularization in MDPs"

3 papers:
The Power of Regularization in Solving Extensive-Form Games
International Conference on Learning Representations (ICLR), 2022
Ming-Yuan Liu
Asuman Ozdaglar
Tiancheng Yu
Jianchao Tan
19 Jun 2022
Fast Policy Extragradient Methods for Competitive Games with Entropy Regularization
Neural Information Processing Systems (NeurIPS), 2021
Shicong Cen
Yuting Wei
Yuejie Chi
31 May 2021
Softmax Policy Gradient Methods Can Take Exponential Time to Converge
Mathematical Programming (Math. Program.), 2021
Gen Li
Yuting Wei
Yuejie Chi
Yuxin Chen
22 Feb 2021