Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
arXiv:1708.08689 · 29 August 2017
Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
AAML

Papers citing "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" (50 of 310 shown)

The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye, Rana Abou-Khamis, Mohamed el Shehaby, Ashraf Matrawy, M. O. Shafiq
AAML · 06 Nov 2019

Data Poisoning Attacks to Local Differential Privacy Protocols
USENIX Security Symposium (USENIX Security), 2019
Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
AAML · 05 Nov 2019

Detecting AI Trojans Using Meta Neural Analysis
IEEE Symposium on Security and Privacy (IEEE S&P), 2019
Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Bo Li
08 Oct 2019

Hidden Trigger Backdoor Attacks
AAAI Conference on Artificial Intelligence (AAAI), 2019
Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
30 Sep 2019

Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
International Conference on Cyberworlds (CW), 2019
Rémi Bernhard, Pierre-Alain Moëllic, J. Dutertre
AAML, MQ · 27 Sep 2019

Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges
Jinyuan Jia, Neil Zhenqiang Gong
AAML, SILM · 17 Sep 2019

Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging
Luis Muñoz-González, Kenneth T. Co, Emil C. Lupu
FedML · 11 Sep 2019

On Defending Against Label Flipping Attacks on Malware Detection Systems
R. Taheri, R. Javidan, Mohammad Shojafar, Zahra Pooranian, A. Miri, Mauro Conti
AAML · 13 Aug 2019

Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
Computer Vision and Pattern Recognition (CVPR), 2019
Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann
AAML · 26 Jun 2019

Poisoning Attacks with Generative Adversarial Nets
Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu
AAML · 18 Jun 2019

Understanding artificial intelligence ethics and safety
Social Science Research Network (SSRN), 2019
David Leslie
FaML, AI4TS · 11 Jun 2019

Making targeted black-box evasion attacks effective and efficient
Mika Juuti, B. Atli, Nadarajah Asokan
AAML, MIACV, MLAU · 08 Jun 2019

Mixed Strategy Game Model Against Data Poisoning Attacks
Yi-Tsen Ou, Reza Samavi
AAML · 07 Jun 2019

DAWN: Dynamic Adversarial Watermarking of Neural Networks
ACM Multimedia (ACM MM), 2019
S. Szyller, B. Atli, Samuel Marchal, Nadarajah Asokan
MLAU, AAML · 03 Jun 2019

An Investigation of Data Poisoning Defenses for Online Learning
Yizhen Wang, Somesh Jha, Kamalika Chaudhuri
AAML · 28 May 2019

Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
Neural Information Processing Systems (NeurIPS), 2019
Ji Feng, Qi-Zhi Cai, Zhi-Hua Zhou
AAML · 22 May 2019

A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
Shahbaz Rezaei, Xin Liu
SILM, AAML · 08 Apr 2019

Learning Discrete Structures for Graph Neural Networks
Luca Franceschi, Mathias Niepert, Massimiliano Pontil, X. He
GNN · 28 Mar 2019

Data Poisoning against Differentially-Private Learners: Attacks and Defenses
Yuzhe Ma, Xiaojin Zhu, Justin Hsu
SILM · 23 Mar 2019

Online Data Poisoning Attack
Xuezhou Zhang, Xiaojin Zhu, Laurent Lessard
AAML · 05 Mar 2019

Evaluating Differentially Private Machine Learning in Practice
Bargav Jayaraman, David Evans
24 Feb 2019

Adversarial Attacks on Graph Neural Networks via Meta Learning
Daniel Zügner, Stephan Günnemann
OOD, AAML, GNN · 22 Feb 2019

A new Backdoor Attack in CNNs by training set corruption without label poisoning
IEEE International Conference on Image Processing (ICIP), 2019
Mauro Barni, Kassem Kallas, B. Tondi
AAML · 12 Feb 2019

Interpretable Deep Learning under Fire
Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang
AAML, AI4CE · 03 Dec 2018

Model-Reuse Attacks on Deep Learning Systems
Yujie Ji, Xinyang Zhang, S. Ji, Xiapu Luo, Ting Wang
SILM, AAML · 02 Dec 2018

Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo
FedML · 29 Nov 2018

Dataset Distillation
Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, Alexei A. Efros
DD · 27 Nov 2018

Biscotti: A Ledger for Private and Secure Peer-to-Peer Machine Learning
Muhammad Shayan, Clement Fung, Chris J. M. Yoon, Ivan Beschastnikh
FedML · 24 Nov 2018

Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, Biplav Srivastava
AAML · 09 Nov 2018

TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks
Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique
AAML · 02 Nov 2018

Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks
Davide Maiorca, Battista Biggio, Giorgio Giacinto
AAML · 02 Nov 2018

Stronger Data Poisoning Attacks Break Data Sanitization Defenses
Pang Wei Koh, Jacob Steinhardt, Percy Liang
02 Nov 2018

Formal Verification of Neural Network Controlled Autonomous Systems
Xiaowu Sun, Haitham Khedr, Yasser Shoukry
31 Oct 2018

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks
Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, Emil C. Lupu
AAML · 30 Sep 2018

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
SILM, AAML · 08 Sep 2018

Adversarial Attacks on Node Embeddings via Graph Poisoning
Aleksandar Bojchevski, Stephan Günnemann
AAML · 04 Sep 2018

Have You Stolen My Model? Evasion Attacks Against Deep Neural Network Watermarking Techniques
Dorjan Hitaj, L. Mancini
AAML · 03 Sep 2018

Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation
C. Liao, Haoti Zhong, Anna Squicciarini, Sencun Zhu, David J. Miller
SILM · 30 Aug 2018

Data Poisoning Attacks against Online Learning
Yizhen Wang, Kamalika Chaudhuri
AAML · 27 Aug 2018

Are You Tampering With My Data?
Michele Alberti, Vinaychandran Pondenkandath, Marcel Würsch, Manuel Bouillon, Mathias Seuret, Rolf Ingold, Marcus Liwicki
AAML · 21 Aug 2018

Mitigation of Adversarial Attacks through Embedded Feature Selection
Ziyi Bao, Luis Muñoz-González, Emil C. Lupu
AAML · 16 Aug 2018

VerIDeep: Verifying Integrity of Deep Neural Networks through Sensitive-Sample Fingerprinting
Zecheng He, Tianwei Zhang, R. Lee
FedML, AAML, MLAU · 09 Aug 2018

Security and Privacy Issues in Deep Learning
Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon
SILM, MIACV · 31 Jul 2018

Adversarial Robustness Toolbox v1.0.0
Maria-Irina Nicolae, M. Sinn, Minh-Ngoc Tran, Beat Buesser, Ambrish Rawat, ..., Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian Molloy, Ben Edwards
AAML, VLM · 03 Jul 2018

Built-in Vulnerabilities to Imperceptible Adversarial Perturbations
T. Tanay, Jerone T. A. Andrews, Lewis D. Griffin
19 Jun 2018

Killing four birds with one Gaussian process: the relation between different test-time attacks
Kathrin Grosse, M. Smith, Michael Backes
AAML · 06 Jun 2018

Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
Kang Liu, Brendan Dolan-Gavitt, S. Garg
AAML · 30 May 2018

PRADA: Protecting against DNN Model Stealing Attacks
Mika Juuti, S. Szyller, Samuel Marchal, Nadarajah Asokan
SILM, AAML · 07 May 2018

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
AAML · 03 Apr 2018

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li
AAML · 01 Apr 2018