Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

29 August 2017
Luis Muñoz-González
Battista Biggio
Ambra Demontis
Andrea Paudice
Vasin Wongrassamee
Emil C. Lupu
Fabio Roli
    AAML

Papers citing "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization"

50 / 310 papers shown
An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences
Wei Guo
B. Tondi
Mauro Barni
AAML
266
98
0
16 Nov 2021
10 Security and Privacy Problems in Large Foundation Models
Jinyuan Jia
Hongbin Liu
Neil Zhenqiang Gong
371
11
0
28 Oct 2021
CAPTIVE: Constrained Adversarial Perturbations to Thwart IC Reverse Engineering
Amir Hosein Afandizadeh Zargari
Marzieh Ashrafiamiri
Minjun Seo
Sai Manoj P D
M. Fouda
Fadi J. Kurdahi
AAML
107
4
0
21 Oct 2021
TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks
Atul Sharma
Wei Chen
Joshua C. Zhao
Qiang Qiu
Somali Chaterji
S. Bagchi
FedML, AAML
152
5
0
19 Oct 2021
FooBaR: Fault Fooling Backdoor Attack on Neural Network Training
IEEE Transactions on Dependable and Secure Computing (IEEE TDSC), 2021
J. Breier
Xiaolu Hou
Martín Ochoa
Jesus Solano
SILM, AAML
270
12
0
23 Sep 2021
BFClass: A Backdoor-free Text Classification Framework
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Zichao Li
Dheeraj Mekala
Chengyu Dong
Jingbo Shang
SILM
172
32
0
22 Sep 2021
SoK: Machine Learning Governance
Varun Chandrasekaran
Hengrui Jia
Anvith Thudi
Adelin Travers
Mohammad Yaghini
Nicolas Papernot
276
20
0
20 Sep 2021
Hard to Forget: Poisoning Attacks on Certified Machine Unlearning
Neil G. Marchant
Benjamin I. P. Rubinstein
Scott Alfeld
MU, AAML
246
88
0
17 Sep 2021
How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data
Zhiyuan Zhang
Lingjuan Lyu
Weiqiang Wang
Lichao Sun
Xu Sun
212
39
0
03 Sep 2021
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning
IEEE Symposium on Security and Privacy (IEEE S&P), 2021
Virat Shejwalkar
Amir Houmansadr
Peter Kairouz
Daniel Ramage
AAML
378
274
0
23 Aug 2021
A Decentralized Federated Learning Framework via Committee Mechanism with Convergence Guarantee
Chunjiang Che
Xiaoli Li
Chuan Chen
Xiaoyu He
Zibin Zheng
FedML
443
94
0
01 Aug 2021
Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning
Jun Wang
Chang Xu
Francisco Guzman
Ahmed El-Kishky
Yuqing Tang
Benjamin I. P. Rubinstein
Trevor Cohn
AAML, SILM
136
34
0
12 Jul 2021
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning
Neural Information Processing Systems (NeurIPS), 2021
Akshay Mehra
B. Kailkhura
Pin-Yu Chen
Jihun Hamm
AAML
263
26
0
08 Jul 2021
Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems
Ron Bitton
Nadav Maman
Inderjeet Singh
Satoru Momiyama
Yuval Elovici
A. Shabtai
217
28
0
05 Jul 2021
The Threat of Offensive AI to Organizations
Computers & Security (CS), 2021
Yisroel Mirsky
Ambra Demontis
J. Kotak
Ram Shankar
Deng Gelei
Liu Yang
Xinming Zhang
Wenke Lee
Yuval Elovici
Battista Biggio
209
101
0
30 Jun 2021
Poisoning the Search Space in Neural Architecture Search
Robert Wu
Nayan Saxena
Rohan Jain
OOD, AAML
88
2
0
28 Jun 2021
Adversarial Examples Make Strong Poisons
Neural Information Processing Systems (NeurIPS), 2021
Liam H. Fowl
Micah Goldblum
Ping Yeh-Chiang
Jonas Geiping
Wojtek Czaja
Tom Goldstein
SILM
290
156
0
21 Jun 2021
Accumulative Poisoning Attacks on Real-time Data
Neural Information Processing Systems (NeurIPS), 2021
Tianyu Pang
Xiao Yang
Yinpeng Dong
Hang Su
Jun Zhu
233
22
0
18 Jun 2021
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri
Liam H. Fowl
Ramalingam Chellappa
Micah Goldblum
Tom Goldstein
SILM
448
150
0
16 Jun 2021
Deep Learning for Predictive Analytics in Reversible Steganography
IEEE Access, 2021
Ching-Chun Chang
Xu Wang
Sisheng Chen
Isao Echizen
Victor Sanchez
Chang-Tsun Li
171
10
0
13 Jun 2021
Disrupting Model Training with Adversarial Shortcuts
Ivan Evtimov
Ian Covert
Aditya Kusupati
Tadayoshi Kohno
AAML
203
10
0
12 Jun 2021
Defending Against Backdoor Attacks in Natural Language Generation
AAAI Conference on Artificial Intelligence (AAAI), 2021
Xiaofei Sun
Xiaoya Li
Yuxian Meng
Xiang Ao
Leilei Gan
Jiwei Li
Tianwei Zhang
AAML, SILM
278
59
0
03 Jun 2021
Gradient-based Data Subversion Attack Against Binary Classifiers
Rosni Vasu
Sanjay Seetharaman
Shubham Malaviya
Manish Shukla
S. Lodha
AAML
107
1
0
31 May 2021
A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers
International Workshop on Machine Learning for Signal Processing (MLSP), 2021
Xi Li
David J. Miller
Zhen Xiang
G. Kesidis
AAML
129
0
0
28 May 2021
Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters
Javier Carnerero-Cano
Luis Muñoz-González
P. Spencer
Emil C. Lupu
AAML
184
11
0
23 May 2021
An End-to-End Framework for Molecular Conformation Generation via Bilevel Programming
International Conference on Machine Learning (ICML), 2021
Minkai Xu
Wujie Wang
Shitong Luo
Chence Shi
Yoshua Bengio
Rafael Gómez-Bombarelli
Jian Tang
3DV
320
90
0
15 May 2021
De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks
IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2021
Jian Chen
Xuxin Zhang
Rui Zhang
Chen Wang
Ling Liu
AAML
165
103
0
08 May 2021
FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning
Bo Zhao
Yang Liu
Liming Fang
Tao Wang
Ke Jiang
FedML
161
6
0
16 Apr 2021
Defending Against Adversarial Denial-of-Service Data Poisoning Attacks
Nicolas Müller
Simon Roschmann
Konstantin Böttinger
AAML
200
0
0
14 Apr 2021
Privacy and Trust Redefined in Federated Machine Learning
Machine Learning and Knowledge Extraction (MLKE), 2021
Pavlos Papadopoulos
Will Abramson
A. Hall
Nikolaos Pitropakis
William J. Buchanan
211
48
0
29 Mar 2021
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
IEEE International Joint Conference on Neural Networks (IJCNN), 2021
Antonio Emanuele Cinà
Sebastiano Vascon
Ambra Demontis
Battista Biggio
Fabio Roli
Marcello Pelillo
AAML
165
13
0
23 Mar 2021
Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles
G. Cantareira
R. Mello
F. Paulovich
AAML
156
10
0
18 Mar 2021
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations
Eitan Borgnia
Jonas Geiping
Valeriia Cherepanova
Liam H. Fowl
Arjun Gupta
Amin Ghiasi
Furong Huang
Micah Goldblum
Tom Goldstein
303
49
0
02 Mar 2021
Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release
Liam H. Fowl
Ping Yeh-Chiang
Micah Goldblum
Jonas Geiping
Arpit Bansal
W. Czaja
Tom Goldstein
218
46
0
16 Feb 2021
Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
Neural Information Processing Systems (NeurIPS), 2021
Lue Tao
Lei Feng
Jinfeng Yi
Sheng-Jun Huang
Songcan Chen
AAML
479
83
0
09 Feb 2021
Security and Privacy for Artificial Intelligence: Opportunities and Challenges
Ayodeji Oseni
Nour Moustafa
Helge Janicke
Peng Liu
Z. Tari
A. Vasilakos
AAML
166
65
0
09 Feb 2021
Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization
IEEE Transactions on Dependable and Secure Computing (IEEE TDSC), 2021
Kang Wei
Jun Li
Ming Ding
Chuan Ma
Yo-Seb Jeon
H. Vincent Poor
FedML
159
12
0
28 Jan 2021
Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation
IEEE Access, 2021
Ranwa Al Mallah
David López
Godwin Badu-Marfo
Bilal Farooq
AAML
254
46
0
24 Jan 2021
On Provable Backdoor Defense in Collaborative Learning
Ximing Qiao
Yuhua Bai
S. Hu
Ang Li
Yiran Chen
Xue Yang
AAML, FedML
86
1
0
19 Jan 2021
Unlearnable Examples: Making Personal Data Unexploitable
International Conference on Learning Representations (ICLR), 2021
Hanxun Huang
Jiabo He
S. Erfani
James Bailey
Yisen Wang
MIA, CV
537
236
0
13 Jan 2021
Data Poisoning Attacks to Deep Learning Based Recommender Systems
Network and Distributed System Security Symposium (NDSS), 2021
Hai Huang
Jiaming Mu
Neil Zhenqiang Gong
Qi Li
Yinan Han
Mingwei Xu
AAML
229
151
0
07 Jan 2021
FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
Network and Distributed System Security Symposium (NDSS), 2020
Xiaoyu Cao
Minghong Fang
Jia Liu
Neil Zhenqiang Gong
FedML
618
893
0
27 Dec 2020
Poisoning Attacks on Cyber Attack Detectors for Industrial Control Systems
ACM Symposium on Applied Computing (SAC), 2020
Moshe Kravchik
Battista Biggio
A. Shabtai
AAML
162
36
0
23 Dec 2020
Selective Forgetting of Deep Networks at a Finer Level than Samples
Tomohiro Hayase
S. Yasutomi
Takashi Katoh
172
15
0
22 Dec 2020
Hardware and Software Optimizations for Accelerating Deep Neural Networks: Survey of Current Trends, Challenges, and the Road Ahead
IEEE Access, 2020
Maurizio Capra
Beatrice Bussolino
Alberto Marchisio
Guido Masera
Maurizio Martina
Mohamed Bennai
BDL
312
175
0
21 Dec 2020
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020
Micah Goldblum
Dimitris Tsipras
Chulin Xie
Xinyun Chen
Avi Schwarzschild
Basel Alomair
Aleksander Madry
Yue Liu
Tom Goldstein
SILM
487
352
0
18 Dec 2020
FoggySight: A Scheme for Facial Lookup Privacy
Proceedings on Privacy Enhancing Technologies (PoPETs), 2020
Ivan Evtimov
Pascal Sturmfels
Tadayoshi Kohno
PICV, FedML
186
26
0
15 Dec 2020
Mitigating the Impact of Adversarial Attacks in Very Deep Networks
Mohammed Hassanin
Ibrahim Radwan
Nour Moustafa
M. Tahtali
Neeraj Kumar
AAML
166
7
0
08 Dec 2020
Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks
Jinyuan Jia
Yupei Liu
Xiaoyu Cao
Neil Zhenqiang Gong
AAML
380
88
0
07 Dec 2020
Privacy and Robustness in Federated Learning: Attacks and Defenses
Lingjuan Lyu
Han Yu
Jiabo He
Chen Chen
Lichao Sun
Jun Zhao
Qiang Yang
Philip S. Yu
FedML
610
479
0
07 Dec 2020
Page 4 of 7