Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation

30 August 2018
C. Liao, Haoti Zhong, Anna Squicciarini, Sencun Zhu, David J. Miller
SILM

Papers citing "Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation"

Showing 12 of 62 citing papers.
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu
AAML · 05 Jul 2020

Backdoor Attacks Against Deep Learning Systems in the Physical World
Emily Wenger, Josephine Passananti, A. Bhagoji, Yuanshun Yao, Haitao Zheng, Ben Y. Zhao
AAML · 25 Jun 2020

An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks
Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu
AAML · 15 Jun 2020

A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects
Zewen Li, Wenjie Yang, Shouheng Peng, Fan Liu
HAI, 3DV · 01 Apr 2020

Generating Semantic Adversarial Examples via Feature Manipulation
Shuo Wang, Surya Nepal, Carsten Rudolph, M. Grobler, Shangyu Chen, Tianle Chen
AAML · 06 Jan 2020

Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic
Zhen Xiang, David J. Miller, Hang Wang, G. Kesidis
AAML · 18 Nov 2019

Defending Neural Backdoors via Generative Distribution Modeling
Ximing Qiao, Yukun Yang, H. Li
AAML · 10 Oct 2019

Detection of Backdoors in Trained Classifiers Without Access to the Training Set
Zhen Xiang, David J. Miller, G. Kesidis
AAML · 27 Aug 2019

Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann
AAML · 26 Jun 2019

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
David J. Miller, Zhen Xiang, G. Kesidis
AAML · 12 Apr 2019

A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
Shahbaz Rezaei, Xin Liu
SILM, AAML · 08 Apr 2019

A new Backdoor Attack in CNNs by training set corruption without label poisoning
Mauro Barni, Kassem Kallas, B. Tondi
AAML · 12 Feb 2019