Provably Efficient Primal-Dual Reinforcement Learning for CMDPs with Non-stationary Objectives and Constraints

28 January 2022
Yuhao Ding, Javad Lavaei

Papers citing "Provably Efficient Primal-Dual Reinforcement Learning for CMDPs with Non-stationary Objectives and Constraints"

10 / 10 papers shown
No-Regret Learning Under Adversarial Resource Constraints: A Spending Plan Is All You Need!
Francesco Emanuele Stradi, Matteo Castiglioni, A. Marchesi, N. Gatti, Christian Kroer
16 Jun 2025

Data-Dependent Regret Bounds for Constrained MABs
Gianmarco Genalti, Francesco Emanuele Stradi, Matteo Castiglioni, A. Marchesi, N. Gatti
26 May 2025

Optimal Strong Regret and Violation in Constrained MDPs via Policy Optimization
Francesco Emanuele Stradi, Matteo Castiglioni, A. Marchesi, Nicola Gatti
03 Oct 2024

Pausing Policy Learning in Non-stationary Reinforcement Learning
Hyunin Lee, Ming Jin, Javad Lavaei, Somayeh Sojoudi
25 May 2024

Learning Adversarial MDPs with Stochastic Hard Constraints
Francesco Emanuele Stradi, Matteo Castiglioni, A. Marchesi, Nicola Gatti
06 Mar 2024

Gradient Shaping for Multi-Constraint Safe Reinforcement Learning
Yi-Fan Yao, Zuxin Liu, Zhepeng Cen, Peide Huang, Tingnan Zhang, Wenhao Yu, Ding Zhao
23 Dec 2023

Constraint-Conditioned Policy Optimization for Versatile Safe Reinforcement Learning
Neural Information Processing Systems (NeurIPS), 2023
Yi-Fan Yao, Zuxin Liu, Zhepeng Cen, Jiacheng Zhu, Wenhao Yu, Tingnan Zhang, Ding Zhao
05 Oct 2023

Provably Efficient Model-Free Constrained RL with Linear Function Approximation
Neural Information Processing Systems (NeurIPS), 2022
A. Ghosh, Xingyu Zhou, Ness B. Shroff
23 Jun 2022

Near-Optimal Goal-Oriented Reinforcement Learning in Non-Stationary Environments
Neural Information Processing Systems (NeurIPS), 2022
Liyu Chen, Haipeng Luo
25 May 2022

Nonstationary Reinforcement Learning with Linear Function Approximation
Huozhi Zhou, Jinglin Chen, Lav Varshney, A. Jagmohan
08 Oct 2020