arXiv: 2107.05243
Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning
12 July 2021
Jun Wang, Chang Xu, Francisco Guzmán, Ahmed El-Kishky, Yuqing Tang, Benjamin I. P. Rubinstein, Trevor Cohn
AAML, SILM
Papers citing "Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning" (21 of 21 papers shown)
Char-mander Use mBackdoor! A Study of Cross-lingual Backdoor Attacks in Multilingual LLMs
Himanshu Beniwal, Sailesh Panda, Birudugadda Srivibhav, Mayank Singh
24 Feb 2025
AdvBDGen: Adversarially Fortified Prompt-Specific Fuzzy Backdoor Generator Against LLM Alignment
Pankayaraj Pathmanathan, Udari Madhushani Sehwag, Michael-Andrei Panaitescu-Liess, Furong Huang
SILM, AAML
15 Oct 2024
Clean Label Attacks against SLU Systems
Lin Zhang, Sonal Joshi, Thomas Thebaud, Jesus Villalba, Najim Dehak, Sanjeev Khudanpur
AAML
13 Sep 2024
Backdoor Attack on Multilingual Machine Translation
Jun Wang, Qiongkai Xu, Xuanli He, Benjamin I. P. Rubinstein, Trevor Cohn
03 Apr 2024
Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code
Cristina Improta
SILM, AAML
11 Mar 2024
Manipulating Predictions over Discrete Inputs in Machine Teaching
Xiaodong Wu, Yufei Han, H. Dahrouj, Jianbing Ni, Zhenwen Liang, Xiangliang Zhang
31 Jan 2024
RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models
Jiong Wang, Junlin Wu, Muhao Chen, Yevgeniy Vorobeychik, Chaowei Xiao
AAML
16 Nov 2023
Error Norm Truncation: Robust Training in the Presence of Data Noise for Text Generation Models
Tianjian Li, Haoran Xu, Philipp Koehn, Daniel Khashabi, Kenton W. Murray
02 Oct 2023
Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review
Pengzhou Cheng, Zongru Wu, Wei Du, Haodong Zhao, Wei Lu, Gongshen Liu
SILM, AAML
12 Sep 2023
Is the U.S. Legal System Ready for AI's Challenges to Human Values?
Inyoung Cheong, Aylin Caliskan, Tadayoshi Kohno
SILM, ELM, AILaw
30 Aug 2023
Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks
Domenico Cotroneo, Cristina Improta, Pietro Liguori, R. Natella
SILM
04 Aug 2023
IMBERT: Making BERT Immune to Insertion-based Backdoor Attacks
Xuanli He, Jun Wang, Benjamin I. P. Rubinstein, Trevor Cohn
SILM
25 May 2023
Backdoor Attacks with Input-unique Triggers in NLP
Xukun Zhou, Jiwei Li, Tianwei Zhang, Lingjuan Lyu, Muqiao Yang, Jun He
SILM, AAML
25 Mar 2023
Mithridates: Auditing and Boosting Backdoor Resistance of Machine Learning Pipelines
Eugene Bagdasaryan, Vitaly Shmatikov
AAML
09 Feb 2023
The unreasonable effectiveness of few-shot learning for machine translation
Xavier Garcia, Yamini Bansal, Colin Cherry, George F. Foster, M. Krikun, Fan Feng, Melvin Johnson, Orhan Firat
02 Feb 2023
TransFool: An Adversarial Attack against Neural Machine Translation Models
Sahar Sadrizadeh, Ljiljana Dolamic, P. Frossard
SILM, AAML
02 Feb 2023
Detecting Backdoors in Deep Text Classifiers
Youyan Guo, Jun Wang, Trevor Cohn
SILM
11 Oct 2022
CATER: Intellectual Property Protection on Text Generation APIs via Conditional Watermarks
Xuanli He, Qiongkai Xu, Yi Zeng, Lingjuan Lyu, Fangzhao Wu, Jiwei Li, R. Jia
WaLM
19 Sep 2022
Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures
Eugene Bagdasaryan, Vitaly Shmatikov
SILM, AAML
09 Dec 2021
Triggerless Backdoor Attack for NLP Tasks with Clean Labels
Leilei Gan, Jiwei Li, Tianwei Zhang, Xiaoya Li, Yuxian Meng, Fei Wu, Yi Yang, Shangwei Guo, Chun Fan
AAML, SILM
15 Nov 2021
Concealed Data Poisoning Attacks on NLP Models
Eric Wallace, Tony Zhao, Shi Feng, Sameer Singh
SILM
23 Oct 2020