
arXiv:2006.13827
Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff in Regret

22 June 2020
Yingjie Fei
Zhuoran Yang
Yudong Chen
Zhaoran Wang
Qiaomin Xie
Abstract

We study risk-sensitive reinforcement learning in episodic Markov decision processes with unknown transition kernels, where the goal is to optimize the total reward under the risk measure of exponential utility. We propose two provably efficient model-free algorithms, Risk-Sensitive Value Iteration (RSVI) and Risk-Sensitive Q-learning (RSQ). These algorithms implement a form of risk-sensitive optimism in the face of uncertainty, which adapts to both risk-seeking and risk-averse modes of exploration. We prove that RSVI attains an $\tilde{O}\big(\lambda(|\beta| H^2) \cdot \sqrt{H^3 S^2 A T}\big)$ regret, while RSQ attains an $\tilde{O}\big(\lambda(|\beta| H^2) \cdot \sqrt{H^4 S A T}\big)$ regret, where $\lambda(u) = (e^{3u} - 1)/u$ for $u > 0$. In the above, $\beta$ is the risk parameter of the exponential utility function, $S$ the number of states, $A$ the number of actions, $T$ the total number of timesteps, and $H$ the episode length. On the flip side, we establish a regret lower bound showing that the exponential dependence on $|\beta|$ and $H$ is unavoidable for any algorithm with an $\tilde{O}(\sqrt{T})$ regret (even when the risk objective is on the same scale as the original reward), thus certifying the near-optimality of the proposed algorithms. Our results demonstrate that incorporating risk awareness into reinforcement learning necessitates an exponential cost in $|\beta|$ and $H$, which quantifies the fundamental tradeoff between risk sensitivity (related to aleatoric uncertainty) and sample efficiency (related to epistemic uncertainty). To the best of our knowledge, this is the first regret analysis of risk-sensitive reinforcement learning with the exponential utility.
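To make the exponential dependence concrete, the prefactor $\lambda(|\beta| H^2)$ in both regret bounds can be evaluated directly from the definition $\lambda(u) = (e^{3u} - 1)/u$ given in the abstract. The sketch below is illustrative only (the function names are ours, not the paper's); it computes the leading term of the RSVI bound, dropping constants and logarithmic factors:

```python
import math

def lam(u: float) -> float:
    """Prefactor lambda(u) = (e^{3u} - 1) / u from the regret bounds, for u > 0."""
    return (math.exp(3 * u) - 1) / u

def rsvi_regret_scale(beta: float, H: int, S: int, A: int, T: int) -> float:
    """Leading term of the RSVI regret bound,
    lambda(|beta| H^2) * sqrt(H^3 S^2 A T),
    ignoring constants and log factors."""
    return lam(abs(beta) * H**2) * math.sqrt(H**3 * S**2 * A * T)
```

Note that as $u \to 0$, $\lambda(u) \to 3$, so a small risk parameter $\beta$ recovers the familiar risk-neutral $\sqrt{T}$ scaling, while increasing $|\beta|$ or $H$ inflates the prefactor exponentially through $e^{3|\beta|H^2}$ — the risk-sample tradeoff the lower bound shows is unavoidable.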
