ResearchTrend.AI
Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks

27 February 2023
Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, Goran Radanović
Topics: AAML, OffRL

Papers citing "Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks"

4 / 4 papers shown
UNIDOOR: A Universal Framework for Action-Level Backdoor Attacks in Deep Reinforcement Learning
Oubo Ma, L. Du, Yang Dai, Chunyi Zhou, Qingming Li, Yuwen Pu, Shouling Ji
41 · 0 · 0 · 28 Jan 2025
Hiding in Plain Sight: Differential Privacy Noise Exploitation for Evasion-resilient Localized Poisoning Attacks in Multiagent Reinforcement Learning
Md Tamjid Hossain, Hung M. La
AAML
11 · 0 · 0 · 01 Jul 2023
BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning
Lun Wang, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, D. Song
AAML
110 · 64 · 0 · 02 May 2021
Robust Reinforcement Learning on State Observations with Learned Optimal Adversary
Huan Zhang, Hongge Chen, Duane S. Boning, Cho-Jui Hsieh
59 · 162 · 0 · 21 Jan 2021