DiffPatch: Generating Customizable Adversarial Patches using Diffusion Models

2 December 2024
Zhixiang Wang, Xiaosen Wang, Bo Wang, Siheng Chen, Zhibo Wang, Xingjun Ma, Yu-Gang Jiang
Abstract

Physical adversarial patches printed on clothing can enable individuals to evade person detectors, but most existing methods prioritize attack effectiveness over stealthiness, resulting in aesthetically unpleasing patches. While generative adversarial networks and diffusion models can produce more natural-looking patches, they often fail to balance stealthiness with attack effectiveness and lack flexibility for user customization. To address these limitations, we propose DiffPatch, a novel diffusion-based framework for generating customizable and naturalistic adversarial patches. Our approach allows users to start from a reference image (rather than random noise) and incorporates masks to create patches of various shapes, not limited to squares. To preserve the original semantics during the diffusion process, we employ Null-text inversion to map random noise samples to a single input image and generate patches through Incomplete Diffusion Optimization (IDO). Our method achieves attack performance comparable to state-of-the-art non-naturalistic patches while maintaining a natural appearance. Using DiffPatch, we construct AdvT-shirt-1K, the first physical adversarial T-shirt dataset comprising over a thousand images captured in diverse scenarios. AdvT-shirt-1K can serve as a useful dataset for training or testing future defense methods.
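For readers unfamiliar with adversarial patch attacks, the sketch below illustrates the general idea the abstract builds on: a patch, constrained to an arbitrary shape by a binary mask, is pasted onto person images and optimized to suppress a detector's person-confidence scores. This is a minimal pixel-space sketch under assumed shapes and placeholders, not the authors' method: DiffPatch instead optimizes in a diffusion model's latent space via Null-text inversion and Incomplete Diffusion Optimization, and the "detector" here merely stands in for a real person detector such as YOLO.

# Minimal sketch of a masked adversarial-patch attack (assumed setup,
# not the authors' code). The detector, shapes, and hyperparameters
# below are placeholders for illustration only.
import torch

def apply_patch(images, patch, mask, box):
    """Paste a masked patch into each image at the top-left corner `box`."""
    y, x = box
    h, w = patch.shape[1:]
    patched = images.clone()
    # Blend: patch pixels where mask == 1, original pixels elsewhere,
    # so the patch can take any shape, not just a square.
    patched[:, :, y:y + h, x:x + w] = (
        mask * patch + (1 - mask) * images[:, :, y:y + h, x:x + w]
    )
    return patched

def detection_loss(detector, patched):
    """Mean of the highest person-confidence score per image (minimized)."""
    scores = detector(patched)              # assumed shape: (B, N) scores
    return scores.max(dim=1).values.mean()

torch.manual_seed(0)
# Placeholder detector: returns one fake "person score" per image.
detector = lambda x: x.mean(dim=(2, 3)).mean(dim=1, keepdim=True)
images = torch.rand(4, 3, 256, 256)                 # toy person images
mask = (torch.rand(1, 64, 64) > 0.2).float()        # arbitrary patch shape
patch = torch.rand(3, 64, 64, requires_grad=True)   # learnable patch
opt = torch.optim.Adam([patch], lr=1e-2)

for step in range(100):
    opt.zero_grad()
    loss = detection_loss(detector, apply_patch(images, patch, mask, (96, 96)))
    loss.backward()
    opt.step()
    with torch.no_grad():
        patch.clamp_(0, 1)                          # keep pixels valid

In the paper's setting, minimizing such a detection loss directly in pixel space is what yields the conspicuous patches the abstract criticizes; steering the optimization through a diffusion model's denoising process is what lets DiffPatch keep the patch close to a natural reference image.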

@article{wang2025_2412.01440,
  title={DiffPatch: Generating Customizable Adversarial Patches using Diffusion Models},
  author={Zhixiang Wang and Xiaosen Wang and Bo Wang and Siheng Chen and Zhibo Wang and Xingjun Ma and Yu-Gang Jiang},
  journal={arXiv preprint arXiv:2412.01440},
  year={2025}
}