Backdooring Convolutional Neural Networks via Targeted Weight Perturbations

7 December 2018
Jacob Dumford, Walter J. Scheirer
AAML
arXiv: 1812.03128

Papers citing "Backdooring Convolutional Neural Networks via Targeted Weight Perturbations"

20 / 20 papers shown
Exploring Channel Distinguishability in Local Neighborhoods of the Model Space in Quantum Neural Networks
Sabrina Herbst, S. S. Cranganore, Vincenzo De Maio, Ivona Brandić
17 Feb 2025

End-to-End Anti-Backdoor Learning on Images and Time Series
Yujing Jiang, Xingjun Ma, S. Erfani, Yige Li, James Bailey
06 Jan 2024

Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack
Sze Jue Yang, Q. Nguyen, Chee Seng Chan, Khoa D. Doan
AAML, DiffM
31 Aug 2023

Beating Backdoor Attack at Its Own Game
Min Liu, Alberto L. Sangiovanni-Vincentelli, Xiangyu Yue
AAML
28 Jul 2023

Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation
Xuanli He, Qiongkai Xu, Jun Wang, Benjamin I. P. Rubinstein, Trevor Cohn
AAML
19 May 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM, AAML
14 Feb 2023

XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei
AAML, FedML
28 Dec 2022

Federated Learning Attacks and Defenses: A Survey
Yao Chen, Yijie Gui, Hong Lin, Wensheng Gan, Yongdong Wu
FedML
27 Nov 2022

Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class
Khoa D. Doan, Yingjie Lao, Ping Li
17 Oct 2022

Understanding Impacts of Task Similarity on Backdoor Attack and Detection
Di Tang, Rui Zhu, XiaoFeng Wang, Haixu Tang, Yi Chen
AAML
12 Oct 2022

Augmentation Backdoors
J. Rance, Yiren Zhao, Ilia Shumailov, Robert D. Mullins
AAML, SILM
29 Sep 2022

Hide and Seek: on the Stealthiness of Attacks against Deep Learning Systems
Zeyan Liu, Fengjun Li, Jingqiang Lin, Zhu Li, Bo Luo
AAML
31 May 2022

Neighboring Backdoor Attacks on Graph Convolutional Network
Liang Chen, Qibiao Peng, Jintang Li, Yang Liu, Jiawei Chen, Yong Li, Zibin Zheng
GNN, AAML
17 Jan 2022

How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data
Zhiyuan Zhang, Lingjuan Lyu, Weiqiang Wang, Lichao Sun, Xu Sun
03 Sep 2021

Turning Federated Learning Systems Into Covert Channels
Gabriele Costa, Fabio Pinelli, S. Soderi, Gabriele Tolomei
FedML
21 Apr 2021

Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele
AAML, MQ
16 Apr 2021

Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
Mingfu Xue, Can He, Shichang Sun, Jian Wang, Weiqiang Liu
AAML
15 Apr 2021

Relating Adversarially Robust Generalization to Flat Minima
David Stutz, Matthias Hein, Bernt Schiele
OOD
09 Apr 2021

Black-box Detection of Backdoor Attacks with Limited Information and Data
Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu
AAML
24 Mar 2021

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
UQCV, BDL
06 Jun 2015