Robust Reinforcement Learning via Adversarial training with Langevin Dynamics

Neural Information Processing Systems (NeurIPS), 2020
14 February 2020
Parameswaran Kamalaruban, Yu-ting Huang, Ya-Ping Hsieh, Paul Rolland, C. Shi, Volkan Cevher
arXiv: 2002.06063

Papers citing "Robust Reinforcement Learning via Adversarial training with Langevin Dynamics"

19 papers
Off-Policy Actor-Critic for Adversarial Observation Robustness: Virtual Alternative Training via Symmetric Policy Evaluation
Kosuke Nakanishi, Akihiro Kubo, Yuji Yasui, Shin Ishii
20 Jun 2025
Robust Reinforcement Learning through Efficient Adversarial Herding
Juncheng Dong, Hao-Lun Hsu, Qitong Gao, Vahid Tarokh, Miroslav Pajic
12 Jun 2023
Max-Min Off-Policy Actor-Critic Method Focusing on Worst-Case Robustness to Model Misspecification
Neural Information Processing Systems (NeurIPS), 2022
Takumi Tanabe, Reimi Sato, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto
07 Nov 2022
Quantification before Selection: Active Dynamics Preference for Robust Reinforcement Learning
Kang Xu, Yan Ma, Wei Li
23 Sep 2022
Identifiability and generalizability from multiple experts in Inverse Reinforcement Learning
Neural Information Processing Systems (NeurIPS), 2022
Paul Rolland, Luca Viano, Norman Schuerhoff, Boris Nikolov, Volkan Cevher
22 Sep 2022
Robust Reinforcement Learning in Continuous Control Tasks with Uncertainty Set Regularization
Conference on Robot Learning (CoRL), 2022
Yuan Zhang, Jianhong Wang, Joschka Boedecker
05 Jul 2022
No-Regret Learning in Games with Noisy Feedback: Faster Rates and Adaptivity via Learning Rate Separation
Neural Information Processing Systems (NeurIPS), 2022
Yu-Guan Hsieh, Kimon Antonakopoulos, Volkan Cevher, P. Mertikopoulos
13 Jun 2022
RORL: Robust Offline Reinforcement Learning via Conservative Smoothing
Neural Information Processing Systems (NeurIPS), 2022
Rui Yang, Chenjia Bai, Xiaoteng Ma, Zhaoran Wang, Chongjie Zhang, Lei Han
06 Jun 2022
Robust Reinforcement Learning as a Stackelberg Game via Adaptively-Regularized Adversarial Training
International Joint Conference on Artificial Intelligence (IJCAI), 2022
Peide Huang, Mengdi Xu, Fei Fang, Ding Zhao
19 Feb 2022
User-Oriented Robust Reinforcement Learning
AAAI Conference on Artificial Intelligence (AAAI), 2022
Haoyi You, Beichen Yu, Haiming Jin, Zhaoxing Yang, Jiahui Sun
15 Feb 2022
Probabilistically Robust Learning: Balancing Average- and Worst-case Performance
International Conference on Machine Learning (ICML), 2022
Avi Schwarzschild, Luiz F. O. Chamon, George J. Pappas, Hamed Hassani
02 Feb 2022
ACReL: Adversarial Conditional value-at-risk Reinforcement Learning
Mathieu Godbout, M. Heuillet, Sharath Chandra, R. Bhati, Audrey Durand
20 Sep 2021
Robust Predictable Control
Neural Information Processing Systems (NeurIPS), 2021
Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine
07 Sep 2021
Policy Smoothing for Provably Robust Reinforcement Learning
International Conference on Learning Representations (ICLR), 2021
Aounon Kumar, Alexander Levine, Soheil Feizi
21 Jun 2021
Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning
International Conference on Machine Learning (ICML), 2021
Sebastian Curi, Ilija Bogunovic, Andreas Krause
18 Mar 2021
Maximum Entropy RL (Provably) Solves Some Robust RL Problems
International Conference on Learning Representations (ICLR), 2021
Benjamin Eysenbach, Sergey Levine
10 Mar 2021
Robust Reinforcement Learning using Adversarial Populations
Eugene Vinitsky, Yuqing Du, Kanaad Parvate, Kathy Jang, Pieter Abbeel, Alexandre M. Bayen
04 Aug 2020
Robust Inverse Reinforcement Learning under Transition Dynamics Mismatch
Luca Viano, Yu-ting Huang, Parameswaran Kamalaruban, Adrian Weller, Volkan Cevher
02 Jul 2020
The limits of min-max optimization algorithms: convergence to spurious non-critical sets
Ya-Ping Hsieh, P. Mertikopoulos, Volkan Cevher
16 Jun 2020