
A backdoor attack against LSTM-based text classification systems
arXiv: 1905.12457
29 May 2019
Jiazhu Dai
Chuanshuai Chen
SILM

Papers citing "A backdoor attack against LSTM-based text classification systems"

45 / 195 papers shown
A General Framework for Defending Against Backdoor Attacks via Influence Graph
Xiaofei Sun
Jiwei Li
Xiaoya Li
Ziyao Wang
Tianwei Zhang
Han Qiu
Fei Wu
Chun Fan
AAML
TDI
24
5
0
29 Nov 2021
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
Xiangyu Qi
Tinghao Xie
Ruizhe Pan
Jifeng Zhu
Yong-Liang Yang
Kai Bu
AAML
35
57
0
25 Nov 2021
An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences
Wei Guo
B. Tondi
Mauro Barni
AAML
37
66
0
16 Nov 2021
10 Security and Privacy Problems in Large Foundation Models
Jinyuan Jia
Hongbin Liu
Neil Zhenqiang Gong
19
7
0
28 Oct 2021
Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks
Yangyi Chen
Fanchao Qi
Hongcheng Gao
Zhiyuan Liu
Maosong Sun
SILM
26
22
0
15 Oct 2021
RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models
Wenkai Yang
Yankai Lin
Peng Li
Jie Zhou
Xu Sun
SILM
AAML
34
103
0
15 Oct 2021
Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer
Fanchao Qi
Yangyi Chen
Xurui Zhang
Mukai Li
Zhiyuan Liu
Maosong Sun
AAML
SILM
82
175
0
14 Oct 2021
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models
Kangjie Chen
Yuxian Meng
Xiaofei Sun
Shangwei Guo
Tianwei Zhang
Jiwei Li
Chun Fan
SILM
34
106
0
06 Oct 2021
BFClass: A Backdoor-free Text Classification Framework
Zichao Li
Dheeraj Mekala
Chengyu Dong
Jingbo Shang
SILM
64
27
0
22 Sep 2021
Hard to Forget: Poisoning Attacks on Certified Machine Unlearning
Neil G. Marchant
Benjamin I. P. Rubinstein
Scott Alfeld
MU
AAML
28
69
0
17 Sep 2021
SanitAIs: Unsupervised Data Augmentation to Sanitize Trojaned Neural Networks
Kiran Karra
C. Ashcraft
Cash Costello
AAML
37
0
0
09 Sep 2021
Adversarial Parameter Defense by Multi-Step Risk Minimization
Zhiyuan Zhang
Ruixuan Luo
Xuancheng Ren
Qi Su
Liangyou Li
Xu Sun
AAML
25
6
0
07 Sep 2021
How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data
Zhiyuan Zhang
Lingjuan Lyu
Weiqiang Wang
Lichao Sun
Xu Sun
21
35
0
03 Sep 2021
Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning
Linyang Li
Demin Song
Xiaonan Li
Jiehang Zeng
Ruotian Ma
Xipeng Qiu
33
135
0
31 Aug 2021
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia
Yupei Liu
Neil Zhenqiang Gong
SILM
SSL
47
152
0
01 Aug 2021
Can You Hear It? Backdoor Attacks via Ultrasonic Triggers
Stefanos Koffas
Jing Xu
Mauro Conti
S. Picek
AAML
27
66
0
30 Jul 2021
Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning
Jun Wang
Chang Xu
Francisco Guzman
Ahmed El-Kishky
Yuqing Tang
Benjamin I. P. Rubinstein
Trevor Cohn
AAML
SILM
27
33
0
12 Jul 2021
Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers
C. Ashcraft
Kiran Karra
23
22
0
14 Jun 2021
Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution
Fanchao Qi
Yuan Yao
Sophia Xu
Zhiyuan Liu
Maosong Sun
SILM
33
126
0
11 Jun 2021
Defending Against Backdoor Attacks in Natural Language Generation
Xiaofei Sun
Xiaoya Li
Yuxian Meng
Xiang Ao
Fei Wu
Jiwei Li
Tianwei Zhang
AAML
SILM
33
47
0
03 Jun 2021
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger
Fanchao Qi
Mukai Li
Yangyi Chen
Zhengyan Zhang
Zhiyuan Liu
Yasheng Wang
Maosong Sun
SILM
19
223
0
26 May 2021
Hidden Backdoors in Human-Centric Language Models
Shaofeng Li
Hui Liu
Tian Dong
Benjamin Zi Hao Zhao
Minhui Xue
Haojin Zhu
Jialiang Lu
SILM
40
147
0
01 May 2021
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models
Wenkai Yang
Lei Li
Zhiyuan Zhang
Xuancheng Ren
Xu Sun
Bin He
SILM
26
147
0
29 Mar 2021
T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification
A. Azizi
I. A. Tahmid
Asim Waheed
Neal Mangaokar
Jiameng Pu
M. Javed
Chandan K. Reddy
Bimal Viswanath
AAML
25
77
0
07 Mar 2021
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks
Zhengyan Zhang
Guangxuan Xiao
Yongwei Li
Tian Lv
Fanchao Qi
Zhiyuan Liu
Yasheng Wang
Xin Jiang
Maosong Sun
AAML
23
68
0
18 Jan 2021
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum
Dimitris Tsipras
Chulin Xie
Xinyun Chen
Avi Schwarzschild
D. Song
Aleksander Madry
Bo Li
Tom Goldstein
SILM
34
271
0
18 Dec 2020
Robustness to Spurious Correlations in Text Classification via Automatically Generated Counterfactuals
Zhao Wang
A. Culotta
CML
OOD
20
99
0
18 Dec 2020
ONION: A Simple and Effective Defense Against Textual Backdoor Attacks
Fanchao Qi
Yangyi Chen
Mukai Li
Yuan Yao
Zhiyuan Liu
Maosong Sun
AAML
45
266
0
20 Nov 2020
Detecting Backdoors in Neural Networks Using Novel Feature-Based Anomaly Detection
Hao Fu
A. Veldanda
Prashanth Krishnamurthy
S. Garg
Farshad Khorrami
AAML
35
14
0
04 Nov 2020
Concealed Data Poisoning Attacks on NLP Models
Eric Wallace
Tony Zhao
Shi Feng
Sameer Singh
SILM
29
18
0
23 Oct 2020
Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing
Zhen Xiang
David J. Miller
G. Kesidis
35
22
0
15 Oct 2020
An Evasion Attack against Stacked Capsule Autoencoder
Jiazhu Dai
Siwei Xiong
AAML
32
1
0
14 Oct 2020
Identifying Spurious Correlations for Robust Text Classification
Zhao Wang
A. Culotta
OOD
11
76
0
06 Oct 2020
Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao
Bao Gia Doan
Zhi-Li Zhang
Siqi Ma
Jiliang Zhang
Anmin Fu
Surya Nepal
Hyoungshick Kim
AAML
36
221
0
21 Jul 2020
Backdoor Learning: A Survey
Yiming Li
Yong Jiang
Zhifeng Li
Shutao Xia
AAML
45
592
0
17 Jul 2020
Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification
Chuanshuai Chen
Jiazhu Dai
SILM
63
125
0
11 Jul 2020
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
Yunfei Liu
Xingjun Ma
James Bailey
Feng Lu
AAML
22
505
0
05 Jul 2020
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Avi Schwarzschild
Micah Goldblum
Arjun Gupta
John P. Dickerson
Tom Goldstein
AAML
TDI
21
162
0
22 Jun 2020
Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption
Xu Sun
Zhiyuan Zhang
Xuancheng Ren
Ruixuan Luo
Liangyou Li
30
39
0
10 Jun 2020
BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements
Xiaoyi Chen
A. Salem
Dingfan Chen
Michael Backes
Shiqing Ma
Qingni Shen
Zhonghai Wu
Yang Zhang
SILM
32
228
0
01 Jun 2020
Weight Poisoning Attacks on Pre-trained Models
Keita Kurita
Paul Michel
Graham Neubig
AAML
SILM
43
434
0
14 Apr 2020
The TrojAI Software Framework: An OpenSource tool for Embedding Trojans into Deep Learning Models
Kiran Karra
C. Ashcraft
Neil Fendley
27
35
0
13 Mar 2020
NNoculation: Catching BadNets in the Wild
A. Veldanda
Kang Liu
Benjamin Tan
Prashanth Krishnamurthy
Farshad Khorrami
Ramesh Karri
Brendan Dolan-Gavitt
S. Garg
AAML
OnRL
21
20
0
19 Feb 2020
Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic
Zhen Xiang
David J. Miller
Hang Wang
G. Kesidis
AAML
34
9
0
18 Nov 2019
Coverage Guided Testing for Recurrent Neural Networks
Wei Huang
Youcheng Sun
Xing-E. Zhao
James Sharp
Wenjie Ruan
Jie Meng
Xiaowei Huang
AAML
40
47
0
05 Nov 2019