A Knowledge-guided Adversarial Defense for Resisting Malicious Visual Manipulation

11 April 2025
Dawei Zhou
Suzhi Gang
Decheng Liu
Tongliang Liu
Nannan Wang
Xinbo Gao
Abstract

Malicious applications of visual manipulation pose serious threats to the security and reputation of users in many fields. To alleviate these threats, adversarial noise-based defenses have been actively studied in recent years. However, "data-only" methods tend to distort fake samples in the low-level feature space rather than the high-level semantic space, which limits their ability to resist malicious manipulation. Recent research has shown that integrating knowledge into deep learning can produce reliable and generalizable solutions. Inspired by these findings, we propose a knowledge-guided adversarial defense (KGAD) that actively forces malicious manipulation models to output semantically confusing samples. Specifically, when generating adversarial noise, we focus on constructing significant semantic confusions at the domain-specific knowledge level, and we replace general pixel-wise metrics with a metric closely related to visual perception. The generated adversarial noise actively interferes with the malicious manipulation model by triggering knowledge-guided and perception-related disruptions in the fake samples. To validate the effectiveness of the proposed method, we conduct qualitative and quantitative experiments on human perception and visual quality assessment. Results on two different tasks show that our defense provides better protection than state-of-the-art methods and generalizes well.
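To make the abstract's idea concrete, below is a minimal sketch of the kind of protective noise-generation loop it describes. This is not the authors' released code: the manipulation model (manipulator), the domain-knowledge classifier (knowledge_model), and the confusing target class (confusing_label) are hypothetical placeholders, and mid-level VGG16 features stand in for the paper's unspecified perception-related metric. The loop perturbs a protected image so that the resulting fake drifts away from the clean fake perceptually and toward a wrong semantic class.

import torch
import torch.nn.functional as F
from torchvision.models import vgg16


def generate_protective_noise(x, manipulator, knowledge_model, confusing_label,
                              eps=8 / 255, alpha=2 / 255, steps=10):
    # Mid-level VGG16 features serve as a stand-in perception-related metric.
    backbone = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
    for p in backbone.parameters():
        p.requires_grad_(False)

    # Reference: how the manipulation model alters the unprotected image.
    with torch.no_grad():
        clean_fake_feats = backbone(manipulator(x))

    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        fake = manipulator(torch.clamp(x + delta, 0.0, 1.0))
        # Perception-related disruption: push the protected fake away from the clean fake.
        percep_term = F.mse_loss(backbone(fake), clean_fake_feats)
        # Knowledge-guided confusion: pull the fake toward a wrong semantic class.
        confusion_term = F.cross_entropy(knowledge_model(fake), confusing_label)
        objective = percep_term - confusion_term
        objective.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient ascent on the objective
            delta.clamp_(-eps, eps)              # keep the noise imperceptible
        delta.grad.zero_()
    return torch.clamp(x + delta, 0.0, 1.0).detach()

A PGD-style loop with an L-infinity budget is only one plausible reading of "generating adversarial noise"; the paper's actual optimization, knowledge model, and perceptual metric may differ.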

@article{zhou2025_2504.08411,
  title={A Knowledge-guided Adversarial Defense for Resisting Malicious Visual Manipulation},
  author={Dawei Zhou and Suzhi Gang and Decheng Liu and Tongliang Liu and Nannan Wang and Xinbo Gao},
  journal={arXiv preprint arXiv:2504.08411},
  year={2025}
}