TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation

18 March 2021
Todd P. Huster, E. Ekwedike
SILM

Papers citing "TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation"

15 / 15 papers shown
Solving Trojan Detection Competitions with Linear Weight Classification
Todd P. Huster, Peter Lin, Razvan Stefanescu, E. Ekwedike, R. Chadha
AAML · 05 Nov 2024

DLP: towards active defense against backdoor attacks with decoupled learning process
Zonghao Ying, Bin Wu
AAML · 18 Jun 2024

Preference Poisoning Attacks on Reward Model Learning
Junlin Wu, Jiong Wang, Chaowei Xiao, Chenguang Wang, Ning Zhang, Yevgeniy Vorobeychik
AAML · 02 Feb 2024

UMD: Unsupervised Model Detection for X2X Backdoor Attacks
Zhen Xiang, Zidi Xiong, Bo-wen Li
AAML · 29 May 2023

Don't FREAK Out: A Frequency-Inspired Approach to Detecting Backdoor Poisoned Samples in DNNs
Hasan Hammoud, Adel Bibi, Philip H. S. Torr, Bernard Ghanem
AAML · 23 Mar 2023

Single Image Backdoor Inversion via Robust Smoothed Classifiers
Mingjie Sun, Zico Kolter
AAML · 01 Mar 2023

Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models
Rui Zhu, Di Tang, Siyuan Tang, XiaoFeng Wang, Haixu Tang
AAML, FedML · 09 Dec 2022

Dormant Neural Trojans
Feisi Fu, Panagiota Kiourti, Wenchao Li
AAML · 02 Nov 2022

An Adaptive Black-box Backdoor Detection Method for Deep Neural Networks
Xinqiao Zhang, Huili Chen, Ke Huang, F. Koushanfar
AAML · 08 Apr 2022

A Survey of Neural Trojan Attacks and Defenses in Deep Learning
Jie Wang, Ghulam Mubashar Hassan, Naveed Akhtar
AAML · 15 Feb 2022

Trigger Hunting with a Topological Prior for Trojan Detection
Xiaoling Hu, Xiaoyu Lin, Michael Cogswell, Yi Yao, Susmit Jha, Chao Chen
AAML · 15 Oct 2021

Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain
Hasan Hammoud, Bernard Ghanem
AAML · 12 Sep 2021

Accumulative Poisoning Attacks on Real-time Data
Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu
18 Jun 2021

MISA: Online Defense of Trojaned Models using Misattributions
Panagiota Kiourti, Wenchao Li, Anirban Roy, Karan Sikka, Susmit Jha
29 Mar 2021

SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou, Florian Tramèr, Giancarlo Pellegrino
AAML · 02 Dec 2018