arXiv: 1708.08689
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
29 August 2017
Luis Muñoz-González
Battista Biggio
Ambra Demontis
Andrea Paudice
Vasin Wongrassamee
Emil C. Lupu
Fabio Roli
AAML
Papers citing "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization"
Showing 50 of 310 citing papers.
The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye
Rana Abou-Khamis
Mohamed el Shehaby
Ashraf Matrawy
M. O. Shafiq
AAML
06 Nov 2019
Data Poisoning Attacks to Local Differential Privacy Protocols
USENIX Security Symposium (USENIX Security), 2019
Xiaoyu Cao
Jinyuan Jia
Neil Zhenqiang Gong
AAML
05 Nov 2019
Detecting AI Trojans Using Meta Neural Analysis
IEEE Symposium on Security and Privacy (IEEE S&P), 2019
Xiaojun Xu
Qi Wang
Huichen Li
Nikita Borisov
Carl A. Gunter
Bo Li
08 Oct 2019
Hidden Trigger Backdoor Attacks
AAAI Conference on Artificial Intelligence (AAAI), 2019
Aniruddha Saha
Akshayvarun Subramanya
Hamed Pirsiavash
30 Sep 2019
Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
International Conference on Cyberworlds (CW), 2019
Rémi Bernhard
Pierre-Alain Moëllic
J. Dutertre
AAML
MQ
27 Sep 2019
Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges
Jinyuan Jia
Neil Zhenqiang Gong
AAML
SILM
17 Sep 2019
Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging
Luis Muñoz-González
Kenneth T. Co
Emil C. Lupu
FedML
11 Sep 2019
On Defending Against Label Flipping Attacks on Malware Detection Systems
R. Taheri
R. Javidan
Mohammad Shojafar
Zahra Pooranian
A. Miri
Mauro Conti
AAML
13 Aug 2019
Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
Computer Vision and Pattern Recognition (CVPR), 2019
Soheil Kolouri
Aniruddha Saha
Hamed Pirsiavash
Heiko Hoffmann
AAML
26 Jun 2019
Poisoning Attacks with Generative Adversarial Nets
Luis Muñoz-González
Bjarne Pfitzner
Matteo Russo
Javier Carnerero-Cano
Emil C. Lupu
AAML
18 Jun 2019
Understanding artificial intelligence ethics and safety
Social Science Research Network (SSRN), 2019
David Leslie
FaML
AI4TS
11 Jun 2019
Making targeted black-box evasion attacks effective and efficient
Mika Juuti
B. Atli
Nadarajah Asokan
AAML
MIACV
MLAU
08 Jun 2019
Mixed Strategy Game Model Against Data Poisoning Attacks
Yi-Tsen Ou
Reza Samavi
AAML
07 Jun 2019
DAWN: Dynamic Adversarial Watermarking of Neural Networks
ACM Multimedia (ACM MM), 2019
S. Szyller
B. Atli
Samuel Marchal
Nadarajah Asokan
MLAU
AAML
03 Jun 2019
An Investigation of Data Poisoning Defenses for Online Learning
Yizhen Wang
Somesh Jha
Kamalika Chaudhuri
AAML
28 May 2019
Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
Neural Information Processing Systems (NeurIPS), 2019
Ji Feng
Qi-Zhi Cai
Zhi-Hua Zhou
AAML
22 May 2019
A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
Shahbaz Rezaei
Xin Liu
SILM
AAML
08 Apr 2019
Learning Discrete Structures for Graph Neural Networks
Luca Franceschi
Mathias Niepert
Massimiliano Pontil
X. He
GNN
28 Mar 2019
Data Poisoning against Differentially-Private Learners: Attacks and Defenses
Yuzhe Ma
Xiaojin Zhu
Justin Hsu
SILM
23 Mar 2019
Online Data Poisoning Attack
Xuezhou Zhang
Xiaojin Zhu
Laurent Lessard
AAML
05 Mar 2019
Evaluating Differentially Private Machine Learning in Practice
Bargav Jayaraman
David Evans
24 Feb 2019
Adversarial Attacks on Graph Neural Networks via Meta Learning
Daniel Zügner
Stephan Günnemann
OOD
AAML
GNN
22 Feb 2019
A new Backdoor Attack in CNNs by training set corruption without label poisoning
IEEE International Conference on Image Processing (ICIP), 2019
Mauro Barni
Kassem Kallas
B. Tondi
AAML
12 Feb 2019
Interpretable Deep Learning under Fire
Xinyang Zhang
Ningfei Wang
Hua Shen
S. Ji
Xiapu Luo
Ting Wang
AAML
AI4CE
03 Dec 2018
Model-Reuse Attacks on Deep Learning Systems
Yujie Ji
Xinyang Zhang
S. Ji
Xiapu Luo
Ting Wang
SILM
AAML
02 Dec 2018
Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji
Supriyo Chakraborty
Prateek Mittal
S. Calo
FedML
29 Nov 2018
Dataset Distillation
Tongzhou Wang
Jun-Yan Zhu
Antonio Torralba
Alexei A. Efros
DD
27 Nov 2018
Biscotti: A Ledger for Private and Secure Peer-to-Peer Machine Learning
Muhammad Shayan
Clement Fung
Chris J. M. Yoon
Ivan Beschastnikh
FedML
24 Nov 2018
Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
Bryant Chen
Wilka Carvalho
Nathalie Baracaldo
Heiko Ludwig
Benjamin Edwards
Taesung Lee
Ian Molloy
Biplav Srivastava
AAML
09 Nov 2018
TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks
Faiq Khalid
Muhammad Abdullah Hanif
Semeen Rehman
Rehan Ahmed
Muhammad Shafique
AAML
02 Nov 2018
Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks
Davide Maiorca
Battista Biggio
Giorgio Giacinto
AAML
02 Nov 2018
Stronger Data Poisoning Attacks Break Data Sanitization Defenses
Pang Wei Koh
Jacob Steinhardt
Percy Liang
02 Nov 2018
Formal Verification of Neural Network Controlled Autonomous Systems
Xiaowu Sun
Haitham Khedr
Yasser Shoukry
31 Oct 2018
Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks
Kenneth T. Co
Luis Muñoz-González
Sixte de Maupeou
Emil C. Lupu
AAML
30 Sep 2018
Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Ambra Demontis
Marco Melis
Maura Pintor
Matthew Jagielski
Battista Biggio
Alina Oprea
Cristina Nita-Rotaru
Fabio Roli
SILM
AAML
08 Sep 2018
Adversarial Attacks on Node Embeddings via Graph Poisoning
Aleksandar Bojchevski
Stephan Günnemann
AAML
04 Sep 2018
Have You Stolen My Model? Evasion Attacks Against Deep Neural Network Watermarking Techniques
Dorjan Hitaj
L. Mancini
AAML
03 Sep 2018
Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation
C. Liao
Haoti Zhong
Anna Squicciarini
Sencun Zhu
David J. Miller
SILM
30 Aug 2018
Data Poisoning Attacks against Online Learning
Yizhen Wang
Kamalika Chaudhuri
AAML
27 Aug 2018
Are You Tampering With My Data?
Michele Alberti
Vinaychandran Pondenkandath
Marcel Würsch
Manuel Bouillon
Mathias Seuret
Rolf Ingold
Marcus Liwicki
AAML
21 Aug 2018
Mitigation of Adversarial Attacks through Embedded Feature Selection
Ziyi Bao
Luis Muñoz-González
Emil C. Lupu
AAML
16 Aug 2018
VerIDeep: Verifying Integrity of Deep Neural Networks through Sensitive-Sample Fingerprinting
Zecheng He
Tianwei Zhang
R. Lee
FedML
AAML
MLAU
09 Aug 2018
Security and Privacy Issues in Deep Learning
Ho Bae
Jaehee Jang
Dahuin Jung
Hyemi Jang
Heonseok Ha
Hyungyu Lee
Sungroh Yoon
SILM
MIACV
31 Jul 2018
Adversarial Robustness Toolbox v1.0.0
Maria-Irina Nicolae
M. Sinn
Minh-Ngoc Tran
Beat Buesser
Ambrish Rawat
...
Nathalie Baracaldo
Bryant Chen
Heiko Ludwig
Ian Molloy
Ben Edwards
AAML
VLM
03 Jul 2018
Built-in Vulnerabilities to Imperceptible Adversarial Perturbations
T. Tanay
Jerone T. A. Andrews
Lewis D. Griffin
19 Jun 2018
Killing four birds with one Gaussian process: the relation between different test-time attacks
Kathrin Grosse
M. Smith
Michael Backes
AAML
06 Jun 2018
Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
Kang Liu
Brendan Dolan-Gavitt
S. Garg
AAML
30 May 2018
PRADA: Protecting against DNN Model Stealing Attacks
Mika Juuti
S. Szyller
Samuel Marchal
Nadarajah Asokan
SILM
AAML
07 May 2018
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Ali Shafahi
W. Ronny Huang
Mahyar Najibi
Octavian Suciu
Christoph Studer
Tudor Dumitras
Tom Goldstein
AAML
03 Apr 2018
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
Matthew Jagielski
Alina Oprea
Battista Biggio
Chang Liu
Cristina Nita-Rotaru
Bo Li
AAML
01 Apr 2018