Robust Deep Reinforcement Learning in Robotics via Adaptive Gradient-Masked Adversarial Attacks

26 March 2025
Zongyuan Zhang
Tianyang Duan
Zheng Lin
Dong Huang
Zihan Fang
Zekai Sun
Ling Xiong
Hongbin Liang
Heming Cui
Yong Cui
Yue Gao
    AAML
Abstract

Deep reinforcement learning (DRL) has emerged as a promising approach for robotic control, but its real-world deployment remains challenging due to its vulnerability to environmental perturbations. Existing white-box adversarial attack methods, adapted from supervised learning, fail to effectively target DRL agents because they overlook temporal dynamics and indiscriminately perturb all state dimensions, limiting their impact on long-term rewards. To address these challenges, we propose the Adaptive Gradient-Masked Reinforcement (AGMR) Attack, a white-box attack method that combines DRL with a gradient-based soft masking mechanism to dynamically identify critical state dimensions and optimize adversarial policies. AGMR selectively allocates perturbations to the most impactful state features and incorporates a dynamic adjustment mechanism to balance exploration and exploitation during training. Extensive experiments demonstrate that AGMR outperforms state-of-the-art adversarial attack methods in degrading the performance of the victim agent, and that it enhances the victim agent's robustness when used within adversarial defense mechanisms.
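
For intuition only, the sketch below illustrates one way a gradient-based soft mask could concentrate a bounded state perturbation on the most influential state dimensions, which is the general idea the abstract describes. The policy interface, the softmax-based mask, the temperature parameter, and the attack objective are assumptions made for illustration; the paper's actual AGMR formulation, including its DRL-trained adversarial policy and the dynamic exploration/exploitation adjustment, is not reproduced here.

import torch

def gradient_masked_perturbation(policy, state, epsilon=0.05, temperature=1.0):
    """Illustrative sketch of a gradient-masked state perturbation.

    Assumptions: `policy` is a differentiable mapping from a state tensor
    to action values (e.g., a PyTorch module), and `epsilon` bounds the
    total perturbation magnitude. The masking rule below is a plausible
    instantiation, not the authors' exact method.
    """
    state = state.clone().detach().requires_grad_(True)

    # Use the victim's own value estimate as the attack objective:
    # pushing it down should degrade long-term reward.
    value = policy(state).max()
    value.backward()
    grad = state.grad.detach()

    # Soft mask: concentrate the budget on state dimensions with the
    # largest gradient magnitude instead of perturbing all of them.
    mask = torch.softmax(grad.abs() / temperature, dim=-1)

    # Allocate the epsilon budget according to the mask, in the
    # direction that decreases the value estimate.
    perturbation = -epsilon * mask * grad.sign()
    return (state + perturbation).detach()

In this sketch, lowering `temperature` sharpens the mask toward a few critical dimensions, while raising it spreads the perturbation more evenly, loosely mirroring the exploration/exploitation balance mentioned in the abstract.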

@article{zhang2025_2503.20844,
  title={Robust Deep Reinforcement Learning in Robotics via Adaptive Gradient-Masked Adversarial Attacks},
  author={Zongyuan Zhang and Tianyang Duan and Zheng Lin and Dong Huang and Zihan Fang and Zekai Sun and Ling Xiong and Hongbin Liang and Heming Cui and Yong Cui and Yue Gao},
  journal={arXiv preprint arXiv:2503.20844},
  year={2025}
}