Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks
arXiv: 2302.13851
27 February 2023
Authors: Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, Goran Radanović
Tags: AAML, OffRL
Papers citing "Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks" (4 papers shown)
UNIDOOR: A Universal Framework for Action-Level Backdoor Attacks in Deep Reinforcement Learning
Oubo Ma, L. Du, Yang Dai, Chunyi Zhou, Qingming Li, Yuwen Pu, Shouling Ji
28 Jan 2025

Hiding in Plain Sight: Differential Privacy Noise Exploitation for Evasion-resilient Localized Poisoning Attacks in Multiagent Reinforcement Learning
Md Tamjid Hossain, Hung M. La
Tags: AAML
01 Jul 2023

BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning
Lun Wang, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, D. Song
Tags: AAML
02 May 2021

Robust Reinforcement Learning on State Observations with Learned Optimal Adversary
Huan Zhang, Hongge Chen, Duane S. Boning, Cho-Jui Hsieh
21 Jan 2021