Test-Time Backdoor Detection for Object Detection Models

19 March 2025
Hangtao Zhang
Yichen Wang
Shihui Yan
Chenyu Zhu
Ziqi Zhou
Linshan Hou
Shengshan Hu
Minghui Li
Yanjun Zhang
L. Zhang
Abstract

Object detection models are vulnerable to backdoor attacks, where attackers poison a small subset of training samples by embedding a predefined trigger to manipulate predictions. Detecting poisoned samples (i.e., those containing triggers) at test time can prevent backdoor activation. However, unlike image classification tasks, the unique characteristics of object detection -- particularly its output of numerous objects -- pose fresh challenges for backdoor detection. The complex attack effects (e.g., "ghost" object emergence or "vanishing" objects) further render current defenses fundamentally inadequate. To this end, we design TRAnsformation Consistency Evaluation (TRACE), a new method for detecting poisoned samples at test time in object detection. Our journey begins with two intriguing observations: (1) Poisoned samples exhibit significantly more consistent detection results than clean ones across varied backgrounds. (2) Clean samples show higher detection consistency when introduced to different focal information. Based on these phenomena, TRACE applies foreground and background transformations to each test sample, then assesses transformation consistency by calculating the variance in object confidences. TRACE achieves black-box, universal backdoor detection, with extensive experiments showing a 30% improvement in AUROC over state-of-the-art defenses and resistance to adaptive attacks.
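The scoring idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: `detect` and `background_transform` are hypothetical stand-ins for a black-box detector and the paper's background transformations, and the score is simply the variance of mean object confidence across transformed copies (low variance flags a suspiciously consistent, possibly poisoned, sample per observation (1)).

```python
import random
import statistics

def detect(image):
    # Hypothetical black-box detector stub. For this sketch, an "image" is
    # already a list of per-object confidences; a real detector (e.g. a
    # Faster R-CNN) would produce boxes and scores from pixels.
    return image

def background_transform(image, seed):
    # Hypothetical background substitution. Here we jitter confidences to
    # simulate how detector outputs vary when the background changes.
    rng = random.Random(seed)
    return [min(1.0, max(0.0, c + rng.uniform(-0.05, 0.05))) for c in image]

def trace_score(image, n_transforms=8):
    """Transformation-consistency score: variance of the mean object
    confidence across transformed copies of the test sample."""
    means = []
    for seed in range(n_transforms):
        confs = detect(background_transform(image, seed))
        means.append(sum(confs) / len(confs) if confs else 0.0)
    return statistics.pvariance(means)
```

A defender would threshold this score: samples whose detections barely move under the transformations (near-zero variance) are treated as likely trigger-carrying inputs.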

@article{zhang2025_2503.15293,
  title={Test-Time Backdoor Detection for Object Detection Models},
  author={Hangtao Zhang and Yichen Wang and Shihui Yan and Chenyu Zhu and Ziqi Zhou and Linshan Hou and Shengshan Hu and Minghui Li and Yanjun Zhang and Leo Yu Zhang},
  journal={arXiv preprint arXiv:2503.15293},
  year={2025}
}