CopyCAT: Taking Control of Neural Policies with Constant Attacks

arXiv:1905.12282, v2 (latest) · 29 May 2019
Léonard Hussenot, Matthieu Geist, Olivier Pietquin · AAML
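
The title's "constant attack" refers to a perturbation that is computed once, offline, and then applied unchanged to every observation the agent receives, rather than recomputed at each timestep. As a rough illustration only (this page shows no abstract or method details), below is a minimal sketch of such an attack under assumed conditions: a discrete-action PyTorch policy network, an L-infinity budget, and a cross-entropy objective toward an attacker-chosen action. The function name `fit_constant_mask` and all hyperparameters are hypothetical, not taken from the paper.

```python
# Sketch of a constant (universal) observation attack: optimize ONE additive
# mask on pre-collected observations, then reuse it unchanged at every step.
# Assumptions: `policy` is a torch.nn.Module mapping a batch of observations
# to action logits; `observations` is a float tensor of shape (N, *obs_shape).
import torch
import torch.nn.functional as F

def fit_constant_mask(policy, observations, target_action,
                      eps=0.05, steps=500, lr=1e-2):
    """Fit a single perturbation `delta`, clipped to an L-inf ball of radius
    `eps`, that pushes the policy toward `target_action` on the whole batch."""
    delta = torch.zeros_like(observations[0], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.full((observations.shape[0],), target_action, dtype=torch.long)
    for _ in range(steps):
        logits = policy(observations + delta)   # the same delta for every obs
        loss = F.cross_entropy(logits, target)  # drive the argmax to target_action
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)             # stay within the L-inf budget
    return delta.detach()

# Hypothetical usage: fit one mask per action the attacker wants to force,
# then at attack time feed policy(obs + mask) instead of policy(obs).
```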

Papers citing "CopyCAT: Taking Control of Neural Policies with Constant Attacks"

12 / 12 papers shown

  1. A Novel Bifurcation Method for Observation Perturbation Attacks on Reinforcement Learning Agents: Load Altering Attacks on a Cyber Physical Power System
     Kiernan Broda-Milian, Ranwa Al-Mallah, H. Dagdougui · AAML · 06 Jul 2024
  2. CuDA2: An approach for Incorporating Traitor Agents into Cooperative Multi-Agent Systems
     Zhen Chen, Yong Liao, Youpeng Zhao, Zipeng Dai, Jian Zhao · AAML · 25 Jun 2024
  3. SoK: Adversarial Machine Learning Attacks and Defences in Multi-Agent Reinforcement Learning
     Maxwell Standen, Junae Kim, Claudia Szabo · AAML · 11 Jan 2023
  4. Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks
     Tim Franzmeyer, Stephen McAleer, João F. Henriques, Jakob N. Foerster, Philip Torr, Adel Bibi, Christian Schroeder de Witt · AAML · 20 Jul 2022
  5. Towards Resilient Artificial Intelligence: Survey and Research Issues
     Oliver Eigner, Sebastian Eresheim, Peter Kieseberg, Lukas Daniel Klausner, Martin Pirker, Torsten Priebe, S. Tjoa, Fiammetta Marulli, F. Mercaldo · AI4CE · 18 Sep 2021
  6. When and How to Fool Explainable Models (and Humans) with Adversarial Examples
     Jon Vadillo, Roberto Santana, Jose A. Lozano · SILM, AAML · 05 Jul 2021
  7. CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing
     Fan Wu, Linyi Li, Zijian Huang, Yevgeniy Vorobeychik, Ding Zhao, Yue Liu · AAML, OffRL · 17 Jun 2021
  8. Real-time Adversarial Perturbations against Deep Reinforcement Learning Policies: Attacks and Defenses
     Buse G. A. Tekgul, Shelly Wang, Samuel Marchal, Nadarajah Asokan · AAML, OffRL · 16 Jun 2021
  9. Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning
     Jianwen Sun, Tianwei Zhang, Xiaofei Xie, Lei Ma, Yan Zheng, Kangjie Chen, Yang Liu · AAML · 14 May 2020
  10. Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions
      Jon Vadillo, Roberto Santana, Jose A. Lozano · AAML · 14 Apr 2020
  11. Adversarial Attacks on Linear Contextual Bandits
      Evrard Garcelon, Baptiste Roziere, Laurent Meunier, Jean Tarbouriech, O. Teytaud, A. Lazaric, Matteo Pirotta · AAML · 10 Feb 2020
  12. Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning
      Inaam Ilahi, Muhammad Usama, Junaid Qadir, M. Janjua, Ala I. Al-Fuqaha, D. Hoang, Dusit Niyato · AAML · 27 Jan 2020