A backdoor attack against LSTM-based text classification systems
Jiazhu Dai, Chuanshuai Chen
29 May 2019 · arXiv:1905.12457
SILM

Papers citing "A backdoor attack against LSTM-based text classification systems"

50 / 195 papers shown
Hidden Ghost Hand: Unveiling Backdoor Vulnerabilities in MLLM-Powered Mobile GUI Agents
Pengzhou Cheng, Haowen Hu, Zheng Wu, Zongru Wu, Tianjie Ju, Daizong Ding, Zhuosheng Zhang, Gongshen Liu
LLMAG, AAML
20 May 2025
Adversarial Attacks in Multimodal Systems: A Practitioner's Survey
Shashank Kapoor, Sanjay Surendranath Girija, Lakshit Arora, Dipen Pradhan, Ankit Shetgaonkar, Aman Raj
AAML
06 May 2025
BadLingual: A Novel Lingual-Backdoor Attack against Large Language Models
Ziyi Wang, Hongwei Li, Rui Zhang, Wenbo Jiang, Kangjie Chen, Tianwei Zhang, Qingchuan Zhao, Guowen Xu
AAML
06 May 2025
The Ultimate Cookbook for Invisible Poison: Crafting Subtle Clean-Label Text Backdoors with Style Attributes
Wencong You, Daniel Lowd
24 Apr 2025
BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts
Qingyue Wang, Qi Pang, Xixun Lin, Shuai Wang, Daoyuan Wu
MoE
24 Apr 2025
Robo-Troj: Attacking LLM-based Task Planners
Mohaiminul Al Nahian, Zainab Altaweel, David Reitano, Sabbir Ahmed, Saumitra Lohokare, Shiqi Zhang, Adnan Siraj Rakin
AAML
23 Apr 2025
Propaganda via AI? A Study on Semantic Backdoors in Large Language Models
Nay Myat Min, Long H. Pham, Yige Li, Jun Sun
AAML
15 Apr 2025
Defending Deep Neural Networks against Backdoor Attacks via Module Switching
Weijun Li, Ansh Arora, Xuanli He, Mark Dras, Qiongkai Xu
AAML, MoMe
08 Apr 2025
DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data
Dorde Popovic, Amin Sadeghi, Ting Yu, Sanjay Chawla, Issa M. Khalil
AAML
27 Mar 2025
Large Language Models Can Verbatim Reproduce Long Malicious Sequences
Sharon Lin, Krishnamurthy Dvijotham, Jamie Hayes, Chongyang Shi, Ilia Shumailov, Shuang Song
AAML
21 Mar 2025
Revisiting Backdoor Attacks on Time Series Classification in the Frequency Domain
Yuanmin Huang, Mi Zhang, Zhaoxiang Wang, Wenxuan Li, Min Yang
AAML, AI4TS
12 Mar 2025
Life-Cycle Routing Vulnerabilities of LLM Router
Qiqi Lin, Xiaoyang Ji, Shengfang Zhai, Qingni Shen, Zhi-Li Zhang, Yuejian Fang, Yansong Gao
AAML
09 Mar 2025
Char-mander Use mBackdoor! A Study of Cross-lingual Backdoor Attacks in Multilingual LLMs
Himanshu Beniwal, Sailesh Panda, Birudugadda Srivibhav, Mayank Singh
24 Feb 2025
PCAP-Backdoor: Backdoor Poisoning Generator for Network Traffic in CPS/IoT Environments
Ajesh Koyatan Chathoth, Stephen Lee
26 Jan 2025
MADE: Graph Backdoor Defense with Masked Unlearning
Xiao Lin, Mingjie Li, Yisen Wang
AAML
03 Jan 2025
Cut the Deadwood Out: Post-Training Model Purification with Selective Module Substitution
Yao Tong, Weijun Li, Xuanli He, Haolan Zhan, Qiongkai Xu
AAML
31 Dec 2024
Double Landmines: Invisible Textual Backdoor Attacks based on Dual-Trigger
Yang Hou, Qiuling Yue, Lujia Chai, Guozhao Liao, Wenbao Han, Wei Ou
23 Dec 2024
Data Free Backdoor Attacks
Bochuan Cao, Jinyuan Jia, Chuxuan Hu, Wenbo Guo, Zhen Xiang, Jinghui Chen, Bo Li, Dawn Song
AAML
09 Dec 2024
Gracefully Filtering Backdoor Samples for Generative Large Language Models without Retraining
Zongru Wu, Pengzhou Cheng, Lingyong Fang, Zhuosheng Zhang, Gongshen Liu
AAML, SILM
03 Dec 2024
Quantized Delta Weight Is Safety Keeper
Yule Liu, Zhen Sun, Xinlei He, Xinyi Huang
29 Nov 2024
PEFTGuard: Detecting Backdoor Attacks Against Parameter-Efficient Fine-Tuning
Zhen Sun, Tianshuo Cong, Yule Liu, Chenhao Lin, Xinlei He, Rongmao Chen, Xingshuo Han, Xinyi Huang
AAML
26 Nov 2024
When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations
Huaizhi Ge, Yiming Li, Qifan Wang, Yongfeng Zhang, Ruixiang Tang
AAML, SILM
19 Nov 2024
CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization
Nay Myat Min, Long H. Pham, Yige Li, Jun Sun
AAML
18 Nov 2024
BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation
Haiyang Yu, Tian Xie, Jiaping Gui, Pengyang Wang, P. Yi, Yue Wu
17 Nov 2024
AdvBDGen: Adversarially Fortified Prompt-Specific Fuzzy Backdoor Generator Against LLM Alignment
Pankayaraj Pathmanathan, Udari Madhushani Sehwag, Michael-Andrei Panaitescu-Liess, Furong Huang
SILM, AAML
15 Oct 2024
Mind Your Questions! Towards Backdoor Attacks on Text-to-Visualization Models
Shuaimin Li, Yuanfeng Song, Xuanang Chen, Anni Peng, Zhuoyue Wan, Chen Jason Zhang, Raymond Chi-Wing Wong
SILM
09 Oct 2024
BACKTIME: Backdoor Attacks on Multivariate Time Series Forecasting
Xiao Lin, Zhining Liu, Dongqi Fu, Ruizhong Qiu, Hanghang Tong
AAML, AI4TS
03 Oct 2024
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning
Zheng Zhang, Xu Yuan, Lei Zhu, Jingkuan Song, Liqiang Nie
AAML
03 Oct 2024
Mitigating Backdoor Threats to Large Language Models: Advancement and Challenges
Qin Liu, Wenjie Mo, Terry Tong, Lyne Tchapmi, Fei Wang, Chaowei Xiao, Muhao Chen
AAML
30 Sep 2024
Learning to Obstruct Few-Shot Image Classification over Restricted Classes
Amber Yijia Zheng, Chiao-An Yang, Raymond A. Yeh
28 Sep 2024
Data-centric NLP Backdoor Defense from the Lens of Memorization
Zhenting Wang, Zhizhi Wang, Mingyu Jin, Mengnan Du, Juan Zhai, Shiqing Ma
21 Sep 2024
Obliviate: Neutralizing Task-agnostic Backdoors within the Parameter-efficient Fine-tuning Paradigm
Jaehan Kim, Minkyoo Song, S. Na, Seungwon Shin
AAML
21 Sep 2024
Exploiting the Vulnerability of Large Language Models via Defense-Aware Architectural Backdoor
Abdullah Arafat Miah, Yu Bi
AAML, SILM
03 Sep 2024
CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models
Rui Zeng, Xi Chen, Yuwen Pu, Xuhong Zhang, Tianyu Du, Shouling Ji
02 Sep 2024
Rethinking Backdoor Detection Evaluation for Language Models
Jun Yan, Wenjie Jacky Mo, Xiang Ren, Robin Jia
ELM
31 Aug 2024
EmoAttack: Utilizing Emotional Voice Conversion for Speech Backdoor Attacks on Deep Speech Classification Models
Wenhan Yao, Zedong Xing, Xiarun Chen, Jia Liu, Yongqiang He, Weiping Wen
AAML
28 Aug 2024
Large Language Models are Good Attackers: Efficient and Stealthy Textual Backdoor Attacks
Ziqiang Li, Yueqi Zeng, Pengfei Xia, Lei Liu, Zhangjie Fu, Bin Li
SILM, AAML
21 Aug 2024
Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs)
Apurv Verma, Satyapriya Krishna, Sebastian Gehrmann, Madhavan Seshadri, Anu Pradhan, Tom Ault, Leslie Barrett, David Rabinowitz, John Doucette, Nhathai Phan
20 Jul 2024
Flatness-aware Sequential Learning Generates Resilient Backdoors
Hoang Pham, The-Anh Ta, Anh Tran, Khoa D. Doan
FedML, AAML
20 Jul 2024
Defense Against Syntactic Textual Backdoor Attacks with Token Substitution
Xinglin Li, Xianwen He, Yao Li, Minhao Cheng
04 Jul 2024
Future Events as Backdoor Triggers: Investigating Temporal Vulnerabilities in LLMs
Sara Price, Arjun Panickssery, Sam Bowman, Asa Cooper Stickland
LLMSV
04 Jul 2024
SOS! Soft Prompt Attack Against Open-Source Large Language Models
Ziqing Yang, Michael Backes, Yang Zhang, Ahmed Salem
AAML
03 Jul 2024
DeepiSign-G: Generic Watermark to Stamp Hidden DNN Parameters for Self-contained Tracking
A. Abuadbba, Nicholas Rhodes, Kristen Moore, Bushra Sabir, Shuo Wang, Yansong Gao
AAML
01 Jul 2024
Attack and Defense of Deep Learning Models in the Field of Web Attack Detection
Lijia Shi, Shihao Dong
AAML
18 Jun 2024
Imperceptible Rhythm Backdoor Attacks: Exploring Rhythm Transformation for Embedding Undetectable Vulnerabilities on Speech Recognition
Wenhan Yao, Jiangkun Yang, Yongqiang He, Jia Liu, Weiping Wen
16 Jun 2024
Chain-of-Scrutiny: Detecting Backdoor Attacks for Large Language Models
Xi Li, Yusen Zhang, Renze Lou, Chen Wu, Jiaqi Wang
LRM, AAML
10 Jun 2024
PromptFix: Few-shot Backdoor Removal via Adversarial Prompt Tuning
Tianrong Zhang, Zhaohan Xi, Ting Wang, Prasenjit Mitra, Jinghui Chen
AAML, SILM
06 Jun 2024
TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models
Yuzhou Nie, Yanting Wang, Jinyuan Jia, Michael J. De Lucia, Nathaniel D. Bastian, Wenbo Guo, Dawn Song
SILM, AAML
27 May 2024
SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks
Xuanli He, Qiongkai Xu, Jun Wang, Benjamin I. P. Rubinstein, Trevor Cohn
AAML
19 May 2024
BadActs: A Universal Backdoor Defense in the Activation Space
Biao Yi, Sishuo Chen, Yiming Li, Tong Li, Baolei Zhang, Zheli Liu
AAML
18 May 2024