Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

29 August 2017
Luis Muñoz-González
Battista Biggio
Ambra Demontis
Andrea Paudice
Vasin Wongrassamee
Emil C. Lupu
Fabio Roli
    AAML

Papers citing "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization"

50 / 310 papers shown
How Robust are Randomized Smoothing based Defenses to Data Poisoning?
Computer Vision and Pattern Recognition (CVPR), 2020
Akshay Mehra
B. Kailkhura
Pin-Yu Chen
Jihun Hamm
OOD, AAML
02 Dec 2020
Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack
Empirical Software Engineering (EMSE), 2020
Rui Shu
Tianpei Xia
Laurie A. Williams
Tim Menzies
AAML
23 Nov 2020
Backdoor Attacks on the DNN Interpretation System
AAAI Conference on Artificial Intelligence (AAAI), 2020
Shihong Fang
A. Choromańska
FAtt, AAML
21 Nov 2020
Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA
Adnan Siraj Rakin
Yukui Luo
Xiaolin Xu
Deliang Fan
AAML
05 Nov 2020
A Targeted Attack on Black-Box Neural Machine Translation with Parallel Data Poisoning
Chang Xu
Jun Wang
Yuqing Tang
Francisco Guzman
Benjamin I. P. Rubinstein
Trevor Cohn
AAML
02 Nov 2020
EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks
IEEE Transactions on Neural Systems and Rehabilitation Engineering (TNSRE), 2020
Lubin Meng
Jian Huang
Zhigang Zeng
Xue Jiang
Shan Yu
T. Jung
Chin-Teng Lin
Ricardo Chavarriaga
Dongrui Wu
AAML
30 Oct 2020
Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers
T. Shapira
David Berend
Ishai Rosenberg
Yang Liu
A. Shabtai
Yuval Elovici
AAML
30 Oct 2020
Pair the Dots: Jointly Examining Training History and Test Stimuli for Model Interpretability
Yuxian Meng
Chun Fan
Zijun Sun
Eduard H. Hovy
Leilei Gan
Jiwei Li
FAtt
14 Oct 2020
Data Poisoning Attacks on Regression Learning and Corresponding Defenses
Pacific Rim International Symposium on Dependable Computing (PRDC), 2020
Nicolas Müller
Daniel Kowatsch
Konstantin Böttinger
AAML
15 Sep 2020
Certified Robustness of Graph Classification against Topology Attack with Randomized Smoothing
Global Communications Conference (GLOBECOM), 2020
Zhidong Gao
Rui Hu
Yanmin Gong
AAML, OOD
12 Sep 2020
Review and Critical Analysis of Privacy-preserving Infection Tracking and Contact Tracing
Frontiers in Communications and Networks (FCN), 2020
William J. Buchanan
Muhammad Ali Imran
M. Rehman
Lei Zhang
Q. Abbasi
C. Chrysoulas
D. Haynes
Nikolaos Pitropakis
Pavlos Papadopoulos
10 Sep 2020
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
International Conference on Learning Representations (ICLR), 2020
Jonas Geiping
Liam H. Fowl
Wenjie Huang
W. Czaja
Gavin Taylor
Michael Moeller
Tom Goldstein
AAML
04 Sep 2020
Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics
International Conference on Learning Representations (ICLR), 2020
Yanchao Sun
Da Huo
Furong Huang
AAML, OffRL, OnRL
02 Sep 2020
Revisiting Adversarially Learned Injection Attacks Against Recommender Systems
ACM Conference on Recommender Systems (RecSys), 2020
Jiaxi Tang
Hongyi Wen
Ke Wang
AAML
11 Aug 2020
Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
AAAI Conference on Artificial Intelligence (AAAI), 2020
Jinyuan Jia
Xiaoyu Cao
Neil Zhenqiang Gong
SILM
11 Aug 2020
Federated Learning via Synthetic Data
Jack Goetz
Ambuj Tewari
FedML, DD
11 Aug 2020
Blackbox Trojanising of Deep Learning Models: Using non-intrusive network structure and binary alterations
IEEE Region 10 Conference (TENCON), 2020
Jonathan Pan
AAML
02 Aug 2020
Towards Class-Oriented Poisoning Attacks Against Neural Networks
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2020
Bingyin Zhao
Yingjie Lao
SILM, AAML
31 Jul 2020
A General Framework For Detecting Anomalous Inputs to DNN Classifiers
International Conference on Machine Learning (ICML), 2020
Jayaram Raghuram
Varun Chandrasekaran
S. Jha
Suman Banerjee
AAML
29 Jul 2020
Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources
International Conference on Machine Learning (ICML), 2020
Yun-Yun Tsai
Pin-Yu Chen
Tsung-Yi Ho
AAML, MLAU, BDL
17 Jul 2020
Data Poisoning Attacks Against Federated Learning Systems
European Symposium on Research in Computer Security (ESORICS), 2020
Vale Tolpegin
Stacey Truex
Mehmet Emre Gursoy
Ling Liu
FedML
16 Jul 2020
Model-Targeted Poisoning Attacks with Provable Convergence
Fnu Suya
Saeed Mahloujifar
Anshuman Suri
David Evans
Yuan Tian
AAML
30 Jun 2020
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
International Conference on Machine Learning (ICML), 2020
Avi Schwarzschild
Micah Goldblum
Arjun Gupta
John P. Dickerson
Tom Goldstein
AAML, TDI
22 Jun 2020
With Great Dispersion Comes Greater Resilience: Efficient Poisoning Attacks and Defenses for Linear Regression Models
Jialin Wen
Benjamin Zi Hao Zhao
Minhui Xue
Alina Oprea
Hai-feng Qian
AAML
21 Jun 2020
OGAN: Disrupting Deepfakes with an Adversarial Attack that Survives Training
Eran Segalis
Eran Galili
17 Jun 2020
Secure Byzantine-Robust Machine Learning
Lie He
Sai Praneeth Karimireddy
Martin Jaggi
OOD
08 Jun 2020
Picket: Guarding Against Corrupted Data in Tabular Data during Learning and Inference
Zifan Liu
Zhechun Zhou
Theodoros Rekatsinas
08 Jun 2020
A Distributed Trust Framework for Privacy-Preserving Machine Learning
Trust and Privacy in Digital Business (TPDB), 2020
Will Abramson
A. Hall
Pavlos Papadopoulos
Nikolaos Pitropakis
William J. Buchanan
03 Jun 2020
Arms Race in Adversarial Malware Detection: A Survey
Deqiang Li
Qianmu Li
Yanfang Ye
Shouhuai Xu
AAML
24 May 2020
A Review of Computer Vision Methods in Network Security
Jiawei Zhao
Rahat Masood
Suranga Seneviratne
AAML
07 May 2020
Live Trojan Attacks on Deep Neural Networks
Robby Costales
Chengzhi Mao
R. Norwitz
Bryan Kim
Junfeng Yang
AAML
22 Apr 2020
Poisoning Attacks on Algorithmic Fairness
David Solans
Battista Biggio
Carlos Castillo
AAML
15 Apr 2020
Weight Poisoning Attacks on Pre-trained Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Keita Kurita
Paul Michel
Graham Neubig
AAML, SILM
14 Apr 2020
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions
Journal of Machine Learning Research (JMLR), 2020
Jon Vadillo
Roberto Santana
Jose A. Lozano
AAML
14 Apr 2020
Practical Data Poisoning Attack against Next-Item Recommendation
The Web Conference (WWW), 2020
Hengtong Zhang
Yaliang Li
Bolin Ding
Jing Gao
AAML
07 Apr 2020
MetaPoison: Practical General-purpose Clean-label Data Poisoning
Neural Information Processing Systems (NeurIPS), 2020
Wenjie Huang
Jonas Geiping
Liam H. Fowl
Gavin Taylor
Tom Goldstein
01 Apr 2020
RAB: Provable Robustness Against Backdoor Attacks
IEEE Symposium on Security and Privacy (IEEE S&P), 2020
Maurice Weber
Xiaojun Xu
Bojan Karlas
Ce Zhang
Yue Liu
AAML
19 Mar 2020
Investigating Generalization in Neural Networks under Optimally Evolved Training Perturbations
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
Subhajit Chaudhury
T. Yamasaki
14 Mar 2020
Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
Giorgio Severi
J. Meyer
Scott E. Coull
Alina Oprea
AAML, SILM
02 Mar 2020
Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation
Javier Carnerero-Cano
Luis Muñoz-González
P. Spencer
Emil C. Lupu
AAML
28 Feb 2020
On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
Sanghyun Hong
Varun Chandrasekaran
Yigitcan Kaya
Tudor Dumitras
Nicolas Papernot
AAML
26 Feb 2020
Defending against Backdoor Attack on Deep Neural Networks
Kaidi Xu
Sijia Liu
Pin-Yu Chen
Pu Zhao
Xinyu Lin
Xue Lin
AAML
26 Feb 2020
Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
International Conference on Machine Learning (ICML), 2020
Elan Rosenfeld
Ezra Winston
Pradeep Ravikumar
J. Zico Kolter
OOD, AAML
07 Feb 2020
Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks
Moshe Kravchik
A. Shabtai
AAML
07 Feb 2020
Backdoor Attacks against Transfer Learning with Pre-trained Deep Learning Models
IEEE Transactions on Services Computing (TSC), 2020
Shuo Wang
Surya Nepal
Carsten Rudolph
M. Grobler
Shangyu Chen
Tianle Chen
AAML
10 Jan 2020
Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer
Hong Chang
Virat Shejwalkar
Reza Shokri
Amir Houmansadr
FedML
24 Dec 2019
Towards Security Threats of Deep Learning Systems: A Survey
Yingzhe He
Guozhu Meng
Kai Chen
Xingbo Hu
Jinwen He
AAML, ELM
28 Nov 2019
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
USENIX Security Symposium (USENIX Security), 2019
Minghong Fang
Xiaoyu Cao
Jinyuan Jia
Neil Zhenqiang Gong
AAML, OOD, FedML
26 Nov 2019
REFIT: A Unified Watermark Removal Framework For Deep Learning Systems With Limited Data
ACM Asia Conference on Computer and Communications Security (AsiaCCS), 2019
Xinyun Chen
Wenxiao Wang
Chris Bender
Yiming Ding
R. Jia
Yue Liu
Basel Alomair
AAML
17 Nov 2019
Penalty Method for Inversion-Free Deep Bilevel Optimization
Asian Conference on Machine Learning (ACML), 2019
Akshay Mehra
Jihun Hamm
08 Nov 2019