Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger

26 May 2021
Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, Maosong Sun
SILM

Papers citing "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger"

44 of 44 papers shown
BadLingual: A Novel Lingual-Backdoor Attack against Large Language Models
Z. Wang, Hongwei Li, Rui Zhang, Wenbo Jiang, Kangjie Chen, Tianwei Zhang, Qingchuan Zhao, Guowen Xu
AAML
06 May 2025

Adversarial Attacks in Multimodal Systems: A Practitioner's Survey
Shashank Kapoor, Sanjay Surendranath Girija, Lakshit Arora, Dipen Pradhan, Ankit Shetgaonkar, Aman Raj
AAML
06 May 2025

Safety in Large Reasoning Models: A Survey
Cheng Wang, Y. Liu, B. Li, Duzhen Zhang, Z. Li, Junfeng Fang, Bryan Hooi
LRM
24 Apr 2025

When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations
Huaizhi Ge, Yiming Li, Qifan Wang, Yongfeng Zhang, Ruixiang Tang
AAML, SILM
19 Nov 2024

BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation
Haiyang Yu, Tian Xie, Jiaping Gui, Pengyang Wang, P. Yi, Yue Wu
17 Nov 2024

AdvBDGen: Adversarially Fortified Prompt-Specific Fuzzy Backdoor Generator Against LLM Alignment
Pankayaraj Pathmanathan, Udari Madhushani Sehwag, Michael-Andrei Panaitescu-Liess, Furong Huang
SILM, AAML
15 Oct 2024

Best-of-Venom: Attacking RLHF by Injecting Poisoned Preference Data
Tim Baumgärtner, Yang Gao, Dana Alon, Donald Metzler
AAML
08 Apr 2024

Two Heads are Better than One: Nested PoE for Robust Defense Against Multi-Backdoors
Victoria Graf, Qin Liu, Muhao Chen
AAML
02 Apr 2024

Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space
Zongru Wu, Zhuosheng Zhang, Pengzhou Cheng, Gongshen Liu
AAML
19 Feb 2024

Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents
Wenkai Yang, Xiaohan Bi, Yankai Lin, Sishuo Chen, Jie Zhou, Xu Sun
LLMAG, AAML
17 Feb 2024

Backdoor Attack against One-Class Sequential Anomaly Detection Models
He Cheng, Shuhan Yuan
SILM, AAML
15 Feb 2024

OrderBkd: Textual backdoor attack through repositioning
Irina Alekseevskaia, Konstantin Arkhipenko
12 Feb 2024

Effective backdoor attack on graph neural networks in link prediction tasks
Jiazhu Dai, Haoyu Sun
GNN
05 Jan 2024

Punctuation Matters! Stealthy Backdoor Attack for Language Models
Xuan Sheng, Zhicheng Li, Zhaoyang Han, Xiangmao Chang, Piji Li
26 Dec 2023

Efficient Trigger Word Insertion
Yueqi Zeng, Ziqiang Li, Pengfei Xia, Lei Liu, Bin Li
AAML
23 Nov 2023

Test-time Backdoor Mitigation for Black-Box Large Language Models with Defensive Demonstrations
Wenjie Mo, Jiashu Xu, Qin Liu, Jiong Wang, Jun Yan, Chaowei Xiao, Muhao Chen
AAML
16 Nov 2023

Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game
Sam Toyer, Olivia Watkins, Ethan Mendes, Justin Svegliato, Luke Bailey, ..., Karim Elmaaroufi, Pieter Abbeel, Trevor Darrell, Alan Ritter, Stuart J. Russell
02 Nov 2023

Setting the Trap: Capturing and Defeating Backdoors in Pretrained Language Models through Honeypots
Ruixiang Tang, Jiayi Yuan, Yiming Li, Zirui Liu, Rui Chen, Xia Hu
AAML
28 Oct 2023

Unlearning with Fisher Masking
Yufang Liu, Changzhi Sun, Yuanbin Wu, Aimin Zhou
MU
09 Oct 2023

The Trickle-down Impact of Reward (In-)consistency on RLHF
Lingfeng Shen, Sihao Chen, Linfeng Song, Lifeng Jin, Baolin Peng, Haitao Mi, Daniel Khashabi, Dong Yu
28 Sep 2023

NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models
Kai Mei, Zheng Li, Zhenting Wang, Yang Zhang, Shiqing Ma
AAML, SILM
28 May 2023

From Shortcuts to Triggers: Backdoor Defense with Denoised PoE
Qin Liu, Fei Wang, Chaowei Xiao, Muhao Chen
AAML
24 May 2023

Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation
Xuanli He, Qiongkai Xu, Jun Wang, Benjamin I. P. Rubinstein, Trevor Cohn
AAML
19 May 2023

Diffusion Theory as a Scalpel: Detecting and Purifying Poisonous Dimensions in Pre-trained Language Models Caused by Backdoor or Bias
Zhiyuan Zhang, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun
08 May 2023

Backdoor Learning on Sequence to Sequence Models
Lichang Chen, Minhao Cheng, Heng-Chiao Huang
SILM
03 May 2023

Harnessing the Speed and Accuracy of Machine Learning to Advance Cybersecurity
Khatoon Mohammed
AAML
24 Feb 2023

Detecting software vulnerabilities using Language Models
Marwan Omar
23 Feb 2023

Poisoning Web-Scale Training Datasets is Practical
Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr
SILM
20 Feb 2023

RobustNLP: A Technique to Defend NLP Models Against Backdoor Attacks
Marwan Omar
SILM, AAML
18 Feb 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM, AAML
14 Feb 2023

TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
H. Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant Kharkar, Christopher Kruegel, Giovanni Vigna, David E. Evans, B. Zorn, Robert Sim
SILM
06 Jan 2023

Robust Smart Home Face Recognition under Starving Federated Data
Jaechul Roh, Yajun Fang
FedML, CVBM, AAML
10 Nov 2022

MSDT: Masked Language Model Scoring Defense in Text Domain
Jaechul Roh, Minhao Cheng, Yajun Fang
AAML
10 Nov 2022

Poison Attack and Defense on Deep Source Code Processing Models
Jia Li, Zhuo Li, Huangzhao Zhang, Ge Li, Zhi Jin, Xing Hu, Xin Xia
AAML
31 Oct 2022

WeDef: Weakly Supervised Backdoor Defense for Text Classification
Lesheng Jin, Zihan Wang, Jingbo Shang
AAML
24 May 2022

RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models
Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, Xu Sun
SILM, AAML
15 Oct 2021

Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer
Fanchao Qi, Yangyi Chen, Xurui Zhang, Mukai Li, Zhiyuan Liu, Maosong Sun
AAML, SILM
14 Oct 2021

BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models
Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang, Jiwei Li, Chun Fan
SILM
06 Oct 2021

Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution
Fanchao Qi, Yuan Yao, Sophia Xu, Zhiyuan Liu, Maosong Sun
SILM
11 Jun 2021

Defending Against Backdoor Attacks in Natural Language Generation
Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao, Fei Wu, Jiwei Li, Tianwei Zhang
AAML, SILM
03 Jun 2021

ONION: A Simple and Effective Defense Against Textual Backdoor Attacks
Fanchao Qi, Yangyi Chen, Mukai Li, Yuan Yao, Zhiyuan Liu, Maosong Sun
AAML
20 Nov 2020

Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification
Chuanshuai Chen, Jiazhu Dai
SILM
11 Jul 2020

Clean-Label Backdoor Attacks on Video Recognition Models
Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang
AAML
06 Mar 2020

Adversarial Example Generation with Syntactically Controlled Paraphrase Networks
Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer
AAML, GAN
17 Apr 2018