Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning (arXiv:2110.04471)
Guanlin Liu, Lifeng Lai. AAML. 9 October 2021
Papers citing "Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning" (24 of 24 papers shown):
Online Poisoning Attack Against Reinforcement Learning under Black-box Environments. Jianhui Li, Bokang Zhang, Junfeng Wu. AAML, OffRL, OnRL. 01 Dec 2024
Provably Efficient Action-Manipulation Attack Against Continuous Reinforcement Learning. Zhi Luo, X. J. Yang, Pan Zhou, D. Wang. AAML. 20 Nov 2024
Stealthy Adversarial Attacks on Stochastic Multi-Armed Bandits. Zhiwei Wang, Huazheng Wang, Hongning Wang. AAML. 21 Feb 2024
Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents. Wenkai Yang, Xiaohan Bi, Yankai Lin, Sishuo Chen, Jie Zhou, Xu Sun. LLMAG, AAML. 17 Feb 2024
Privacy and Security Implications of Cloud-Based AI Services: A Survey. Alka Luqman, Riya Mahesh, Anupam Chattopadhyay. 31 Jan 2024
Camouflage Adversarial Attacks on Multiple Agent Systems. Ziqing Lu, Guanlin Liu, Lifeng Lai, Weiyu Xu. AAML. 30 Jan 2024
BadRL: Sparse Targeted Backdoor Attack Against Reinforcement Learning. Jing Cui, Yufei Han, Yuzhe Ma, Jianbin Jiao, Junge Zhang. AAML. 19 Dec 2023
Optimal Attack and Defense for Reinforcement Learning. Jeremy McMahan, Young Wu, Xiaojin Zhu, Qiaomin Xie. AAML, OffRL. 30 Nov 2023
Optimal Cost Constrained Adversarial Attacks For Multiple Agent Systems. Ziqing Lu, Guanlin Liu, Lifeng Lai, Weiyu Xu. AAML. 01 Nov 2023
Efficient Adversarial Attacks on Online Multi-agent Reinforcement Learning. Guanlin Liu, Lifeng Lai. AAML. 15 Jul 2023
Efficient Action Robust Reinforcement Learning with Probabilistic Policy Execution Uncertainty. Guanlin Liu, Zhihan Zhou, Han Liu, Lifeng Lai. 15 Jul 2023
Data Poisoning to Fake a Nash Equilibrium in Markov Games. Young Wu, Jeremy McMahan, Xiaojin Zhu, Qiaomin Xie. OffRL. 13 Jun 2023
Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks. Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, Goran Radanović. AAML, OffRL. 27 Feb 2023
Adversarial Attacks on Adversarial Bandits. Yuzhe Ma, Zhijin Zhou. AAML. 30 Jan 2023
A Survey on Reinforcement Learning Security with Application to Autonomous Driving. Ambra Demontis, Maura Pintor, Luca Demetrio, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli. AAML. 12 Dec 2022
Reward Poisoning Attacks on Offline Multi-Agent Reinforcement Learning. Young Wu, Jeremy McMahan, Xiaojin Zhu, Qiaomin Xie. AAML, OffRL. 04 Jun 2022
Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning. Yinglun Xu, Qi Zeng, Gagandeep Singh. AAML. 30 May 2022
Reinforcement Learning for Linear Quadratic Control is Vulnerable Under Cost Manipulation. Yunhan Huang, Quanyan Zhu. OffRL, AAML. 11 Mar 2022
On the Convergence and Robustness of Adversarial Training. Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, Quanquan Gu. AAML. 15 Dec 2021
Efficient Action Poisoning Attacks on Linear Contextual Bandits. Guanlin Liu, Lifeng Lai. AAML. 10 Dec 2021
When Are Linear Stochastic Bandits Attackable? Huazheng Wang, Haifeng Xu, Hongning Wang. AAML. 18 Oct 2021
Reward Poisoning in Reinforcement Learning: Attacks Against Unknown Learners in Unknown Environments. Amin Rakhsha, Xuezhou Zhang, Xiaojin Zhu, Adish Singla. AAML, OffRL. 16 Feb 2021
Defense Against Reward Poisoning Attacks in Reinforcement Learning. Kiarash Banihashem, Adish Singla, Goran Radanović. AAML. 10 Feb 2021
Adversarial Machine Learning at Scale. Alexey Kurakin, Ian Goodfellow, Samy Bengio. AAML. 04 Nov 2016