
Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy
Ramtin Keramati, Christoph Dann, Alex Tamkin, Emma Brunskill
arXiv:1911.01546, 5 November 2019

Papers citing "Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy" (3 of 53 shown)
Risk-Averse Bayes-Adaptive Reinforcement Learning
Marc Rigter, Bruno Lacerda, Nick Hawes
Neural Information Processing Systems (NeurIPS), 2021 · 10 Feb 2021
Cautious Reinforcement Learning via Distributional Risk in the Dual Domain
Junyu Zhang, Amrit Singh Bedi, Mengdi Wang, Alec Koppel
27 Feb 2020
Stochastically Dominant Distributional Reinforcement Learning
John D. Martin, Michal Lyskawinski, Xiaohu Li, Brendan Englot
17 May 2019