
AdvLogo: Adversarial Patch Attack against Object Detectors based on Diffusion Models

11 September 2024
Boming Miao
Chunxiao Li
Yao Zhu
Weixiang Sun
Zizhe Wang
Xiaoyi Wang
Chuanlong Xie
Topics: DiffM, AAML
Abstract

With the rapid development of deep learning, object detectors have demonstrated impressive performance; however, vulnerabilities still exist in certain scenarios. Current research exploring these vulnerabilities with adversarial patches often struggles to balance attack effectiveness against visual quality. To address this problem, we propose a novel patch-attack framework from a semantic perspective, which we refer to as AdvLogo. Based on the hypothesis that every semantic space contains an adversarial subspace in which images cause detectors to fail to recognize objects, we leverage the semantic understanding of the diffusion denoising process and drive the process toward adversarial subregions by perturbing the latent and the unconditional embeddings at the last timestep. To mitigate the distribution shift that negatively impacts image quality, we apply the perturbation to the latent in the frequency domain via the Fourier Transform. Experimental results demonstrate that AdvLogo achieves strong attack performance while maintaining high visual quality.
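
The abstract describes an optimization over the diffusion model's last-timestep latent and its unconditional embedding, with the latent update applied to Fourier coefficients. The following minimal PyTorch sketch illustrates one such update step; it is not the authors' released implementation. The helpers denoise (a pretrained diffusion pipeline's full denoising loop from the last-timestep latent and unconditional embedding to a generated logo) and detector_loss (a loss that grows as a frozen object detector fails to recognize objects in a scene carrying the logo as a patch) are assumed stand-ins, and the sign-gradient step sizes are illustrative assumptions.

import torch

def advlogo_step(z_T, uncond_emb, denoise, detector_loss,
                 step_latent=1e-2, step_uncond=1e-3):
    # One ascent step on the last-timestep latent z_T (updated in the
    # frequency domain) and on the unconditional embedding.
    z_T = z_T.detach().requires_grad_(True)
    uncond_emb = uncond_emb.detach().requires_grad_(True)

    patch = denoise(z_T, uncond_emb)   # generated adversarial logo
    loss = detector_loss(patch)        # maximize: detector should miss objects

    grad_z, grad_u = torch.autograd.grad(loss, (z_T, uncond_emb))

    # Latent update in the frequency domain: transform the gradient with the
    # 2-D FFT, take a normalized step on the Fourier coefficients, and map
    # back to the spatial latent. Operating on Fourier coefficients is the
    # abstract's stated way of limiting the distribution shift that would
    # otherwise degrade image quality.
    z_freq = torch.fft.fft2(z_T) + step_latent * torch.sgn(torch.fft.fft2(grad_z))
    z_T_new = torch.fft.ifft2(z_freq).real

    # The unconditional embedding is perturbed directly (sign-gradient ascent
    # here is an assumed update rule for illustration).
    uncond_new = uncond_emb + step_uncond * grad_u.sign()
    return z_T_new.detach(), uncond_new.detach()

In an actual attack loop, this step would be repeated for a fixed number of iterations, regenerating the logo with the updated latent and embedding each time and evaluating detector_loss on scenes with the logo pasted as a patch.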

@article{miao2025_2409.07002,
  title={AdvLogo: Adversarial Patch Attack against Object Detectors based on Diffusion Models},
  author={Boming Miao and Chunxiao Li and Yao Zhu and Weixiang Sun and Zizhe Wang and Xiaoyi Wang and Chuanlong Xie},
  journal={arXiv preprint arXiv:2409.07002},
  year={2025}
}