arXiv: 2008.01761
Can Adversarial Weight Perturbations Inject Neural Backdoors?
4 August 2020 · Siddhant Garg, Adarsh Kumar, Vibhor Goel, Yingyu Liang
Tags: AAML
Papers citing "Can Adversarial Weight Perturbations Inject Neural Backdoors?" (6 of 56 papers shown)
| Title | Authors | Tags | Metrics | Date |
|---|---|---|---|---|
| Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Wenkai Yang, Lei Li, Zhiyuan Zhang, Xuancheng Ren, Xu Sun, Bin He | SILM | 16 / 146 / 0 | 29 Mar 2021 |
| BERT & Family Eat Word Salad: Experiments with Text Understanding | Ashim Gupta, Giorgi Kvernadze, Vivek Srikumar | | 195 / 73 / 0 | 10 Jan 2021 |
| Backdoor Learning: A Survey | Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia | AAML | 35 / 585 / 0 | 17 Jul 2020 |
| Analyzing Federated Learning through an Adversarial Lens | A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo | FedML | 177 / 1,031 / 0 | 29 Nov 2018 |
| Generating Natural Language Adversarial Examples | M. Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, Kai-Wei Chang | AAML | 243 / 914 / 0 | 21 Apr 2018 |
| Convolutional Neural Networks for Sentence Classification | Yoon Kim | AILaw, VLM | 250 / 13,347 / 0 | 25 Aug 2014 |