ResearchTrend.AI

A backdoor attack against LSTM-based text classification systems
Jiazhu Dai, Chuanshuai Chen
arXiv:1905.12457 · 29 May 2019 · SILM

Papers citing "A backdoor attack against LSTM-based text classification systems"

50 / 195 papers shown
  1. Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation · Xuanli He, Qiongkai Xu, Jun Wang, Benjamin I. P. Rubinstein, Trevor Cohn · AAML · 19 May 2023
  2. A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation · Xiaowei Huang, Wenjie Ruan, Wei Huang, Gao Jin, Yizhen Dong, ..., Sihao Wu, Peipei Xu, Dengyu Wu, André Freitas, Mustafa A. Mustafa · ALM · 19 May 2023
  3. UOR: Universal Backdoor Attacks on Pre-trained Language Models · Wei Du, Peixuan Li, Bo Li, Haodong Zhao, Gongshen Liu · AAML · 16 May 2023
  4. Diffusion Theory as a Scalpel: Detecting and Purifying Poisonous Dimensions in Pre-trained Language Models Caused by Backdoor or Bias · Zhiyuan Zhang, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun · 08 May 2023
  5. Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning · Shengfang Zhai, Yinpeng Dong, Qingni Shen, Shih-Chieh Pu, Yuejian Fang, Hang Su · 07 May 2023
  6. Backdoor Learning on Sequence to Sequence Models · Lichang Chen, Minhao Cheng, Heng-Chiao Huang · SILM · 03 May 2023
  7. Defending against Insertion-based Textual Backdoor Attacks via Attribution · Jiazhao Li, Zhuofeng Wu, Ming-Yu Liu, Chaowei Xiao, V. Vydiswaran · 03 May 2023
  8. ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger · Jiazhao Li, Yijin Yang, Zhuofeng Wu, V. Vydiswaran, Chaowei Xiao · SILM · 27 Apr 2023
  9. CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning · Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang · AAML · 06 Mar 2023
  10. NCL: Textual Backdoor Defense Using Noise-augmented Contrastive Learning · Shengfang Zhai, Qingni Shen, Xiaoyi Chen, Weilong Wang, Cong Li, Yuejian Fang, Zhonghai Wu · AAML · 03 Mar 2023
  11. Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions · Marwan Omar · SILM, AAML · 14 Feb 2023
  12. Mithridates: Auditing and Boosting Backdoor Resistance of Machine Learning Pipelines · Eugene Bagdasaryan, Vitaly Shmatikov · AAML · 09 Feb 2023
  13. BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing · Jiali Wei, Ming Fan, Wenjing Jiao, Wuxia Jin, Ting Liu · AAML · 25 Jan 2023
  14. Stealthy Backdoor Attack for Code Models · Zhou Yang, Bowen Xu, Jie M. Zhang, Hong Jin Kang, Jieke Shi, Junda He, David Lo · AAML · 06 Jan 2023
  15. TrojanPuzzle: Covertly Poisoning Code-Suggestion Models · H. Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant Kharkar, Christopher Kruegel, Giovanni Vigna, David Evans, B. Zorn, Robert Sim · SILM · 06 Jan 2023
  16. Backdoor Vulnerabilities in Normally Trained Deep Learning Models · Guanhong Tao, Zhenting Wang, Shuyang Cheng, Shiqing Ma, Shengwei An, Yingqi Liu, Guangyu Shen, Zhuo Zhang, Yunshu Mao, Xiangyu Zhang · SILM · 29 Nov 2022
  17. A Survey on Backdoor Attack and Defense in Natural Language Processing · Xuan Sheng, Zhaoyang Han, Piji Li, Xiangmao Chang · SILM · 22 Nov 2022
  18. Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models · Zhiyuan Zhang, Lingjuan Lyu, Xingjun Ma, Chenguang Wang, Xu Sun · AAML · 18 Oct 2022
  19. Expose Backdoors on the Way: A Feature-Based Efficient Defense against Textual Backdoor Attacks · Sishuo Chen, Wenkai Yang, Zhiyuan Zhang, Xiaohan Bi, Xu Sun · SILM, AAML · 14 Oct 2022
  20. Distributed Distributionally Robust Optimization with Non-Convex Objectives · Yang Jiao, Kai Yang, Dongjin Song · 14 Oct 2022
  21. Dim-Krum: Backdoor-Resistant Federated Learning for NLP with Dimension-wise Krum-Based Aggregation · Zhiyuan Zhang, Qi Su, Xu Sun · FedML · 13 Oct 2022
  22. Detecting Backdoors in Deep Text Classifiers · Youyan Guo, Jun Wang, Trevor Cohn · SILM · 11 Oct 2022
  23. CATER: Intellectual Property Protection on Text Generation APIs via Conditional Watermarks · Xuanli He, Qiongkai Xu, Yi Zeng, Lingjuan Lyu, Fangzhao Wu, Jiwei Li, R. Jia · WaLM · 19 Sep 2022
  24. BadRes: Reveal the Backdoors through Residual Connection · Min He, Tianyu Chen, Haoyi Zhou, Shanghang Zhang, Jianxin Li · 15 Sep 2022
  25. DeepHider: A Covert NLP Watermarking Framework Based on Multi-task Learning · Long Dai, Jiarong Mao, Xuefeng Fan, Xiaoyi Zhou · 09 Aug 2022
  26. Attention Hijacking in Trojan Transformers · Weimin Lyu, Songzhu Zheng, Teng Ma, Haibin Ling, Chao Chen · 09 Aug 2022
  27. Black-box Dataset Ownership Verification via Backdoor Watermarking · Yiming Li, Mingyan Zhu, Xue Yang, Yong Jiang, Tao Wei, Shutao Xia · AAML · 04 Aug 2022
  28. Catch Me If You Can: Deceiving Stance Detection and Geotagging Models to Protect Privacy of Individuals on Twitter · Dilara Doğan, Bahadir Altun, Muhammed Said Zengin, Mucahid Kutlu, Tamer Elsayed · 23 Jul 2022
  29. Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection · Jinyin Chen, Chengyu Jia, Haibin Zheng, Ruoxi Chen, Chenbo Fu · AAML · 17 Jun 2022
  30. A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks · Ganqu Cui, Lifan Yuan, Bingxiang He, Yangyi Chen, Zhiyuan Liu, Maosong Sun · AAML, ELM, SILM · 17 Jun 2022
  31. Fisher SAM: Information Geometry and Sharpness Aware Minimisation · Minyoung Kim, Da Li, S. Hu, Timothy M. Hospedales · AAML · 10 Jun 2022
  32. Kallima: A Clean-label Framework for Textual Backdoor Attacks · Xiaoyi Chen, Yinpeng Dong, Zeyu Sun, Shengfang Zhai, Qingni Shen, Zhonghai Wu · AAML · 03 Jun 2022
  33. Defending Against Stealthy Backdoor Attacks · Sangeet Sagar, Abhinav Bhatt, Abhijith Srinivas Bidaralli · AAML · 27 May 2022
  34. BITE: Textual Backdoor Attacks with Iterative Trigger Injection · Jun Yan, Vansh Gupta, Xiang Ren · SILM · 25 May 2022
  35. WeDef: Weakly Supervised Backdoor Defense for Text Classification · Lesheng Jin, Zihan Wang, Jingbo Shang · AAML · 24 May 2022
  36. A Study of the Attention Abnormality in Trojaned BERTs · Weimin Lyu, Songzhu Zheng, Teng Ma, Chao Chen · 13 May 2022
  37. Exploring the Universal Vulnerability of Prompt-based Learning Paradigm · Lei Xu, Yangyi Chen, Ganqu Cui, Hongcheng Gao, Zhiyuan Liu · SILM, VPVLM · 11 Apr 2022
  38. An Adaptive Black-box Backdoor Detection Method for Deep Neural Networks · Xinqiao Zhang, Huili Chen, Ke Huang, F. Koushanfar · AAML · 08 Apr 2022
  39. Trojan Horse Training for Breaking Defenses against Backdoor Attacks in Deep Learning · Arezoo Rajabi, Bhaskar Ramasubramanian, Radha Poovendran · AAML · 25 Mar 2022
  40. Towards Robust Stacked Capsule Autoencoder with Hybrid Adversarial Training · Jiazhu Dai, Siwei Xiong · AAML · 28 Feb 2022
  41. An Equivalence Between Data Poisoning and Byzantine Gradient Attacks · Sadegh Farhadkhani, R. Guerraoui, L. Hoang, Oscar Villemaud · FedML · 17 Feb 2022
  42. A Survey of Neural Trojan Attacks and Defenses in Deep Learning · Jie Wang, Ghulam Mubashar Hassan, Naveed Akhtar · AAML · 15 Feb 2022
  43. Threats to Pre-trained Language Models: Survey and Taxonomy · Shangwei Guo, Chunlong Xie, Jiwei Li, Lingjuan Lyu, Tianwei Zhang · PILM · 14 Feb 2022
  44. Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense · Guangyu Shen, Yingqi Liu, Guanhong Tao, Qiuling Xu, Zhuo Zhang, Shengwei An, Shiqing Ma, Xinming Zhang · AAML · 11 Feb 2022
  45. A Survey on Poisoning Attacks Against Supervised Machine Learning · Wenjun Qiu · AAML · 05 Feb 2022
  46. Neighboring Backdoor Attacks on Graph Convolutional Network · Liang Chen, Qibiao Peng, Jintang Li, Yang Liu, Jiawei Chen, Yong Li, Zibin Zheng · GNN, AAML · 17 Jan 2022
  47. Rethink the Evaluation for Attack Strength of Backdoor Attacks in Natural Language Processing · Lingfeng Shen, Haiyun Jiang, Lemao Liu, Shuming Shi · ELM · 09 Jan 2022
  48. Dual-Key Multimodal Backdoors for Visual Question Answering · Matthew Walmer, Karan Sikka, Indranil Sur, Abhinav Shrivastava, Susmit Jha · AAML · 14 Dec 2021
  49. Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks · Xi Li, Zhen Xiang, David J. Miller, G. Kesidis · AAML · 06 Dec 2021
  50. Safe Distillation Box · Jingwen Ye, Yining Mao, Mingli Song, Xinchao Wang, Cheng Jin, Xiuming Zhang · AAML · 05 Dec 2021