Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

29 August 2017 · arXiv: 1708.08689
Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
AAML

Papers citing "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization"

Showing 50 of 310 citing papers.
SecureLearn - An Attack-agnostic Defense for Multiclass Machine Learning Against Data Poisoning Attacks
Anum Paracha, Junaid Arshad, Mohamed Ben Farah, Khalid Ismail
AAML · 25 Oct 2025

Provable Watermarking for Data Poisoning Attacks
Yifan Zhu, Lijia Yu, Xiao-Shan Gao
AAML · 10 Oct 2025

SketchGuard: Scaling Byzantine-Robust Decentralized Federated Learning via Sketch-Based Screening
Murtaza Rangwala, Farag Azzedin, Richard O. Sinnott, Rajkumar Buyya
FedML, AAML · 09 Oct 2025

StealthAttack: Robust 3D Gaussian Splatting Poisoning via Density-Guided Illusions
Bo-Hsu Ke, You-Zhe Xie, Yu-Lun Liu, Wei-Chen Chiu
AAML, 3DGS · 02 Oct 2025

AntiFLipper: A Secure and Efficient Defense Against Label-Flipping Attacks in Federated Learning
Aashnan Rahman, Abid Hasan, Sherajul Arifin, Faisal Haque Bappy, Tahrim Hossain, Tariqul Islam, Abu Raihan Mostofa Kamal, M. Hossain
AAML · 26 Sep 2025

Decoding Deception: Understanding Automatic Speech Recognition Vulnerabilities in Evasion and Poisoning Attacks
Aravindhan G, Yuvaraj Govindarajulu, Parin Shah
AAML · 26 Sep 2025

Not All Samples Are Equal: Quantifying Instance-level Difficulty in Targeted Data Poisoning
William Xu, Yiwei Lu, Yihan Wang, Matthew Y.R. Yang, Zuoqiu Liu, Gautam Kamath, Yaoliang Yu
08 Sep 2025
Adversarial Robustness in Distributed Quantum Machine Learning
Pouya Kananian, Hans-Arno Jacobsen
OOD, AAML · 16 Aug 2025

Defending Against Beta Poisoning Attacks in Machine Learning Models
Computer Science Symposium in Russia (CSR), 2025
Nilufer Gulciftci, M. Emre Gursoy
AAML · 02 Aug 2025

Addressing The Devastating Effects Of Single-Task Data Poisoning In Exemplar-Free Continual Learning
Stanisław Pawlak, Bartłomiej Twardowski, Tomasz Trzciński, Joost van de Weijer
AAML, CLL · 05 Jul 2025

Stress-Testing ML Pipelines with Adversarial Data Corruption
Proceedings of the VLDB Endowment (PVLDB), 2025
Jiongli Zhu, Geyang Xu, Felipe Lorenzi, Boris Glavic, Babak Salimi
02 Jun 2025

Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach
International Joint Conference on Artificial Intelligence (IJCAI), 2025
Huazi Pan, Yanjun Zhang, Leo Yu Zhang, Scott Adams, Abbas Kouzani, Suiyang Khoo
FedML · 22 May 2025

Clean Image May be Dangerous: Data Poisoning Attacks Against Deep Hashing
Shuai Li, Jie Zhang, Yuang Qi, Kejiang Chen, Tianwei Zhang, Weinan Zhang, Nenghai Yu
27 Mar 2025

Data Poisoning in Deep Learning: A Survey
Pinlong Zhao, Weiyao Zhu, Pengfei Jiao, Di Gao, Ou Wu
AAML · 27 Mar 2025
Adversarial Prompt Evaluation: Systematic Benchmarking of Guardrails Against Prompt Input Attacks on LLMs
Giulio Zizzo, Giandomenico Cornacchia, Kieran Fraser, Muhammad Zaid Hameed, Ambrish Rawat, Beat Buesser, Mark Purcell, Pin-Yu Chen, P. Sattigeri, Kush R. Varshney
AAML · 24 Feb 2025

Decoding FL Defenses: Systemization, Pitfalls, and Remedies
M. A. Khan, Virat Shejwalkar, Yasra Chandio, Amir Houmansadr, Fatima M. Anwar
AAML · 03 Feb 2025

BridgePure: Limited Protection Leakage Can Break Black-Box Data Protection
Yihan Wang, Yiwei Lu, Xiao-Shan Gao, Gautam Kamath, Yaoliang Yu
30 Dec 2024

Set-Valued Sensitivity Analysis of Deep Neural Networks
AAAI Conference on Artificial Intelligence (AAAI), 2024
Xin Wang, Feiling Wang, X. Ban
15 Dec 2024

Defending Against Neural Network Model Inversion Attacks via Data Poisoning
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2024
Shuai Zhou, Dayong Ye, Tianqing Zhu, Wanlei Zhou
AAML · 10 Dec 2024

Adversarial Filtering Based Evasion and Backdoor Attacks to EEG-Based Brain-Computer Interfaces
Information Fusion (Inf. Fusion), 2024
Lubin Meng, Xue Jiang, Xiaoqing Chen, Wenzhong Liu, Hanbin Luo, Dongrui Wu
AAML · 10 Dec 2024

Deferred Poisoning: Making the Model More Vulnerable via Hessian Singularization
Yuhao He, Jinyu Tian, Xianwei Zheng, Li Dong, Yuanman Li, L. Zhang
AAML · 06 Nov 2024
Poison-splat: Computation Cost Attack on 3D Gaussian Splatting
International Conference on Learning Representations (ICLR), 2024
Jiahao Lu, Yifan Zhang, Qiuhong Shen, Xinchao Wang, Shuicheng Yan
3DGS · 10 Oct 2024

Timber! Poisoning Decision Trees
Stefano Calzavara, Lorenzo Cazzaro, Massimo Vettori
AAML · 01 Oct 2024

HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning
Neural Information Processing Systems (NeurIPS), 2024
M. A. Khan, Yasra Chandio, Fatima M. Anwar
AAML · 30 Sep 2024

In-depth Analysis of Privacy Threats in Federated Learning for Medical Data
B. Das, M. H. Amini, Yanzhao Wu
27 Sep 2024

Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm?
Network and Distributed System Security Symposium (NDSS), 2024
Rui Wen, Michael Backes, Yang Zhang
TDI, AAML · 05 Sep 2024

Transfer-based Adversarial Poisoning Attacks for Online (MIMO-)Deep Receviers
Kunze Wu, Weiheng Jiang, Dusit Niyato, Yinghuan Li, Chuang Luo
AAML · 04 Sep 2024

Achieving Byzantine-Resilient Federated Learning via Layer-Adaptive Sparsified Model Aggregation
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2024
Jiahao Xu, Zikai Zhang, Rui Hu
02 Sep 2024

Improving SAM Requires Rethinking its Optimization Formulation
Wanyun Xie, Fabian Latorre, Kimon Antonakopoulos, Thomas Pethick, Volkan Cevher
17 Jul 2024
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks
Lukas Gosch, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar, Stephan Günnemann
AAML · 15 Jul 2024

Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning
Shihua Sun, Shridatt Sugrim, Angelos Stavrou, Haining Wang
AAML · 13 Jul 2024

Deep Learning for Network Anomaly Detection under Data Contamination: Evaluating Robustness and Mitigating Performance Degradation
D'Jeff K. Nkashama, Jordan Masakuna Félicien, Arian Soltani, Jean-Charles Verdier, Pierre Martin Tardif, Marc Frappier, F. Kabanza
AAML · 11 Jul 2024

A Comprehensive Survey on the Security of Smart Grid: Challenges, Mitigations, and Future Research Opportunities
Arastoo Zibaeirad, Farnoosh Koleini, Shengping Bi, Tao Hou, Tao Wang
AAML · 10 Jul 2024

DeepiSign-G: Generic Watermark to Stamp Hidden DNN Parameters for Self-contained Tracking
A. Abuadbba, Nicholas Rhodes, Kristen Moore, Bushra Sabir, Shuo Wang, Yansong Gao
AAML · 01 Jul 2024

Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel
AAML, MU · 25 Jun 2024
ECLIPSE: Expunging Clean-label Indiscriminate Poisons via Sparse Diffusion Purification
Xianlong Wang, Shengshan Hu, Yechao Zhang, Ziqi Zhou, Leo Yu Zhang, Peng Xu, Wei Wan, Hai Jin
AAML · 21 Jun 2024

Byzantine-Robust Decentralized Federated Learning
Conference on Computer and Communications Security (CCS), 2024
Minghong Fang, Zifan Zhang, Hairi, Prashant Khanduri, Jia Liu, Songtao Lu, Yuchen Liu, Neil Zhenqiang Gong
AAML, FedML, OOD · 14 Jun 2024

Certified Robustness to Data Poisoning in Gradient-Based Training
Philip Sosnin, Mark N. Müller, Maximilian Baader, Calvin Tsay, Matthew Wicker
AAML, SILM · 09 Jun 2024

AI Risk Management Should Incorporate Both Safety and Security
Xiangyu Qi, Yangsibo Huang, Yi Zeng, Edoardo Debenedetti, Jonas Geiping, ..., Chaowei Xiao, Yue Liu, Dawn Song, Peter Henderson, Prateek Mittal
AAML · 29 May 2024

Leakage-Resilient and Carbon-Neutral Aggregation Featuring the Federated AI-enabled Critical Infrastructure
Zehang Deng, Ruoxi Sun, Minhui Xue, Sheng Wen, S. Çamtepe, Surya Nepal, Yang Xiang
24 May 2024

Poisoning Attacks on Federated Learning for Autonomous Driving
Sonakshi Garg, Hugo Jönsson, Gustav Kalander, Axel Nilsson, Bhhaanu Pirange, Viktor Valadi, Johan Östman
AAML · 02 May 2024
FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models
Yanting Wang, Wei Zou, Jinyuan Jia
12 Apr 2024

Disguised Copyright Infringement of Latent Diffusion Models
International Conference on Machine Learning (ICML), 2024
Yiwei Lu, Matthew Y.R. Yang, Zuoqiu Liu, Gautam Kamath, Yaoliang Yu
WIGM · 10 Apr 2024

Backdoor Attack on Multilingual Machine Translation
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Jun Wang, Xingliang Yuan, Xuanli He, Benjamin I. P. Rubinstein, Trevor Cohn
03 Apr 2024

Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion
Hossein Souri, Arpit Bansal, Hamid Kazemi, Liam H. Fowl, Aniruddha Saha, Jonas Geiping, Andrew Gordon Wilson, Rama Chellappa, Tom Goldstein, Micah Goldblum
SILM, DiffM · 25 Mar 2024

Nonsmooth Implicit Differentiation: Deterministic and Stochastic Convergence Rates
International Conference on Machine Learning (ICML), 2024
Riccardo Grazzi, Massimiliano Pontil, Saverio Salzo
18 Mar 2024

Interactive Trimming against Evasive Online Data Manipulation Attacks: A Game-Theoretic Approach
IEEE International Conference on Data Engineering (ICDE), 2024
Yue Fu, Qingqing Ye, Rong Du, Haibo Hu
AAML · 15 Mar 2024

Medical Unlearnable Examples: Securing Medical Data from Unauthorized Training via Sparsity-Aware Local Masking
Weixiang Sun, Yixin Liu, Zhiling Yan, Kaidi Xu, Lichao Sun
AAML · 15 Mar 2024

Asset-centric Threat Modeling for AI-based Systems
Computer Science Symposium in Russia (CSSR), 2024
Jan von der Assen, Jamo Sharif, Chao Feng, Christian Killer, Gérome Bovet, Burkhard Stiller
11 Mar 2024

Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks
Ehsan Nowroozi, Imran Haider, R. Taheri, Mauro Conti
AAML · 05 Mar 2024