ResearchTrend.AI

arXiv:2111.12965
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
25 November 2021
Authors: Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong-Liang Yang, Kai Bu
Topic: AAML

Papers citing "Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks" (7 of 7 papers shown):
1. "A Backdoor Attack Scheme with Invisible Triggers Based on Model Architecture Modification" — Yuan Ma, Xu Ma, Jiankang Wei, Jinmeng Tang, Xiaoyu Zhang, Yilun Lyu, Kehao Chen, Jingtong Huang (22 Dec 2024)
2. "Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective" — Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu [AAML] (19 Feb 2023)
3. "FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model" — Shuo Shao, Wenyuan Yang, Hanlin Gu, Zhan Qin, Lixin Fan, Qiang Yang, Kui Ren [FedML] (14 Nov 2022)
4. "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection" — Yiming Li, Yang Bai, Yong Jiang, Yong-Liang Yang, Shutao Xia, Bo Li [AAML] (27 Sep 2022)
5. "DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection" — Yuanchun Li, Jiayi Hua, Haoyu Wang, Chunyang Chen, Yunxin Liu [FedML, SILM] (18 Jan 2021)
6. "SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems" — Edward Chou, Florian Tramèr, Giancarlo Pellegrino [AAML] (02 Dec 2018)
7. "Adversarial examples in the physical world" — Alexey Kurakin, Ian Goodfellow, Samy Bengio [SILM, AAML] (08 Jul 2016)