Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

29 August 2017
Luis Muñoz-González
Battista Biggio
Ambra Demontis
Andrea Paudice
Vasin Wongrassamee
Emil C. Lupu
Fabio Roli
AAML

Papers citing "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization"

50 / 310 papers shown
Vicious Classifiers: Data Reconstruction Attack at Inference Time
Mohammad Malekzadeh
Deniz Gunduz
AAML, MIACV
08 Dec 2022
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu
Wenjie Qu
Jinyuan Jia
Neil Zhenqiang Gong
SSL
06 Dec 2022
ConfounderGAN: Protecting Image Data Privacy with Causal Confounder
Neural Information Processing Systems (NeurIPS), 2022
Qi Tian
Kun Kuang
Ke Jiang
Furui Liu
Zhihua Wang
Leilei Gan
04 Dec 2022
The Perils of Learning From Unlabeled Data: Backdoor Attacks on Semi-supervised Learning
IEEE International Conference on Computer Vision (ICCV), 2022
Virat Shejwalkar
Lingjuan Lyu
Amir Houmansadr
AAML
01 Nov 2022
Secure and Trustworthy Artificial Intelligence-Extended Reality (AI-XR) for Metaverses
ACM Computing Surveys (ACM CSUR), 2022
Adnan Qayyum
M. A. Butt
Hassan Ali
Muhammad Usman
O. Halabi
Ala I. Al-Fuqaha
Q. Abbasi
Muhammad Ali Imran
Junaid Qadir
24 Oct 2022
Evolution of Neural Tangent Kernels under Benign and Adversarial Training
Neural Information Processing Systems (NeurIPS), 2022
Noel Loo
Ramin Hasani
Alexander Amini
Daniela Rus
AAML
21 Oct 2022
New data poison attacks on machine learning classifiers for mobile exfiltration
M. A. Ramírez
Sangyoung Yoon
Ernesto Damiani
H. A. Hamadi
C. Ardagna
Nicola Bena
Young-Ji Byon
Tae-Yeon Kim
C. Cho
C. Yeun
AAML
20 Oct 2022
Emerging Threats in Deep Learning-Based Autonomous Driving: A Comprehensive Survey
Huiyun Cao
Wenlong Zou
Yinkun Wang
Ting Song
Mengjun Liu
AAML
19 Oct 2022
Transferable Unlearnable Examples
International Conference on Learning Representations (ICLR), 2022
Jie Ren
Han Xu
Yuxuan Wan
Jiabo He
Lichao Sun
Shucheng Zhou
18 Oct 2022
Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Zhiyuan Zhang
Lingjuan Lyu
Jiabo He
Chenguang Wang
Xu Sun
AAML
18 Oct 2022
Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning
Yuxin Wen
Jonas Geiping
Liam H. Fowl
Hossein Souri
Ramalingam Chellappa
Micah Goldblum
Tom Goldstein
AAML, SILM, FedML
17 Oct 2022
Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class
Neural Information Processing Systems (NeurIPS), 2022
Khoa D. Doan
Yingjie Lao
Ping Li
17 Oct 2022
Dim-Krum: Backdoor-Resistant Federated Learning for NLP with Dimension-wise Krum-Based Aggregation
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Zhiyuan Zhang
Qi Su
Xu Sun
FedML
13 Oct 2022
Few-shot Backdoor Attacks via Neural Tangent Kernels
International Conference on Learning Representations (ICLR), 2022
J. Hayase
Sewoong Oh
12 Oct 2022
Design of secure and robust cognitive system for malware detection
Sanket Shukla
AAML
03 Aug 2022
Testing the Robustness of Learned Index Structures
Matthias Bachfischer
Renata Borovica-Gajic
Benjamin I. P. Rubinstein
AAML
23 Jul 2022
Suppressing Poisoning Attacks on Federated Learning for Medical Imaging
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2022
Naif Alkhunaizi
Dmitry Kamzolov
Martin Takáč
Karthik Nandakumar
OOD
15 Jul 2022
Robustness Evaluation of Deep Unsupervised Learning Algorithms for Intrusion Detection Systems
D'Jeff K. Nkashama
Ariana Soltani
Jean-Charles Verdier
Marc Frappier
Pierre-Martin Tardif
F. Kabanza
OOD, AAML
25 Jun 2022
FLVoogd: Robust And Privacy Preserving Federated Learning
Asian Conference on Machine Learning (ACML), 2022
Yuhang Tian
Rui Wang
Yan Qiao
E. Panaousis
K. Liang
FedML
24 Jun 2022
Integrity Authentication in Tree Models
Knowledge Discovery and Data Mining (KDD), 2022
Weijie Zhao
Yingjie Lao
Ping Li
30 May 2022
Circumventing Backdoor Defenses That Are Based on Latent Separability
Xiangyu Qi
Tinghao Xie
Yiming Li
Saeed Mahloujifar
Prateek Mittal
AAML
26 May 2022
Unintended memorisation of unique features in neural networks
J. Hartley
Sotirios A. Tsaftaris
20 May 2022
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
Harsh Chaudhari
Matthew Jagielski
Alina Oprea
20 May 2022
Trustworthy Graph Neural Networks: Aspects, Methods and Trends
Proceedings of the IEEE (Proc. IEEE), 2022
He Zhang
Bang Wu
Lizhen Qu
Shirui Pan
Hanghang Tong
Jian Pei
16 May 2022
MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic
Hang Wang
Zhen Xiang
David J. Miller
G. Kesidis
AAML
13 May 2022
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
USENIX Security Symposium (USENIX Security), 2022
Hongbin Liu
Jinyuan Jia
Neil Zhenqiang Gong
13 May 2022
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
ACM Computing Surveys (ACM CSUR), 2022
Antonio Emanuele Cinà
Kathrin Grosse
Ambra Demontis
Sebastiano Vascon
Werner Zellinger
Bernhard A. Moser
Alina Oprea
Battista Biggio
Marcello Pelillo
Fabio Roli
AAML
04 May 2022
Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu
Gautam Kamath
Yaoliang Yu
AAML
19 Apr 2022
Machine Learning Security against Data Poisoning: Are We There Yet?
Antonio Emanuele Cinà
Kathrin Grosse
Ambra Demontis
Battista Biggio
Fabio Roli
Marcello Pelillo
AAML
12 Apr 2022
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Conference on Computer and Communications Security (CCS), 2022
Florian Tramèr
Reza Shokri
Ayrton San Joaquin
Hoang Minh Le
Matthew Jagielski
Sanghyun Hong
Nicholas Carlini
MIACV
31 Mar 2022
WaveFuzz: A Clean-Label Poisoning Attack to Protect Your Voice
Yunjie Ge
Qianqian Wang
Jingfeng Zhang
Juntao Zhou
Yunzhu Zhang
Chao Shen
AAML
25 Mar 2022
Automation of reversible steganographic coding with nonlinear discrete optimisation
Connection Science (CS), 2022
Ching-Chun Chang
26 Feb 2022
Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
International Conference on Learning Representations (ICLR), 2022
Hao He
Kaiwen Zha
Dina Katabi
AAML
22 Feb 2022
Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
M. A. Ramírez
Song-Kyoo Kim
H. A. Hamadi
Ernesto Damiani
Young-Ji Byon
Tae-Yeon Kim
C. Cho
C. Yeun
AAML
21 Feb 2022
Attacks, Defenses, And Tools: A Framework To Facilitate Robust AI/ML Systems
Mohamad Fazelnia
I. Khokhlov
Mehdi Mirakhorli
AAML
18 Feb 2022
An Equivalence Between Data Poisoning and Byzantine Gradient Attacks
International Conference on Machine Learning (ICML), 2022
Sadegh Farhadkhani
R. Guerraoui
L. Hoang
Oscar Villemaud
FedML
17 Feb 2022
Bilevel Optimization with a Lower-level Contraction: Optimal Sample Complexity without Warm-start
Journal of Machine Learning Research (JMLR), 2022
Riccardo Grazzi
Massimiliano Pontil
Saverio Salzo
07 Feb 2022
On the predictability in reversible steganography
Telecommunications Systems (TS), 2022
Ching-Chun Chang
Xu Wang
Sisheng Chen
Hitoshi Kiya
Isao Echizen
05 Feb 2022
Learnability Lock: Authorized Learnability Control Through Adversarial Invertible Transformations
International Conference on Learning Representations (ICLR), 2022
Weiqi Peng
Jinghui Chen
AAML
03 Feb 2022
Can Adversarial Training Be Manipulated By Non-Robust Features?
Neural Information Processing Systems (NeurIPS), 2022
Lue Tao
Lei Feng
Jianguo Huang
Jinfeng Yi
Sheng-Jun Huang
Songcan Chen
AAML
31 Jan 2022
Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Conference on Computer and Communications Security (CCS), 2022
Zayd Hammoudeh
Daniel Lowd
TDI
25 Jan 2022
Adversarial Machine Learning Threat Analysis and Remediation in Open Radio Access Network (O-RAN)
Journal of Network and Computer Applications (JNCA), 2022
Edan Habler
Ron Bitton
D. Avraham
D. Mimran
Eitan Klevansky
Oleg Brodt
Heiko Lehmann
Yuval Elovici
A. Shabtai
AAML
16 Jan 2022
Bayesian Neural Networks for Reversible Steganography
IEEE Access, 2022
Ching-Chun Chang
BDL
07 Jan 2022
Towards Understanding Quality Challenges of the Federated Learning for Neural Networks: A First Look from the Lens of Robustness
Empirical Software Engineering (EMSE), 2022
Amin Eslami Abyane
Derui Zhu
Roberto Souza
Lei Ma
Hadi Hemmati
AAML, OOD, FedML
05 Jan 2022
Execute Order 66: Targeted Data Poisoning for Reinforcement Learning
Harrison Foley
Liam H. Fowl
Tom Goldstein
Gavin Taylor
AAML
03 Jan 2022
Adversarial Attacks against Windows PE Malware Detection: A Survey of the State-of-the-Art
Computers & Security (CS), 2021
Xiang Ling
Lingfei Wu
Jiangyu Zhang
Zhenqing Qu
Wei Deng
...
Chunming Wu
S. Ji
Tianyue Luo
Jingzheng Wu
Yanjun Wu
AAML
23 Dec 2021
Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey
Shangwei Guo
Xu Zhang
Feiyu Yang
Tianwei Zhang
Yan Gan
Tao Xiang
Yang Liu
FedML
19 Dec 2021
On the Security & Privacy in Federated Learning
Gorka Abad
S. Picek
Víctor Julio Ramírez-Durán
A. Urbieta
10 Dec 2021
Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data
USENIX Security Symposium (USENIX Security), 2021
Yongji Wu
Xiaoyu Cao
Jinyuan Jia
Neil Zhenqiang Gong
AAML
22 Nov 2021
Fooling Adversarial Training with Inducing Noise
Zhirui Wang
Yifei Wang
Yisen Wang
19 Nov 2021