Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

29 August 2017
Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
AAML

Papers citing "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization"

Showing 10 of 310 citing papers.
Learning to Reweight Examples for Robust Deep Learning
Mengye Ren, Wenyuan Zeng, Binh Yang, R. Urtasun
OOD, NoLa
24 Mar 2018
Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
AAML
19 Mar 2018
Label Sanitization against Label Flipping Poisoning Attacks
Andrea Paudice, Luis Muñoz-González, Emil C. Lupu
AAML
02 Mar 2018
Asynchronous Byzantine Machine Learning (the case of SGD)
Georgios Damaskinos, El-Mahdi El-Mhamdi, R. Guerraoui, Rhicheek Patra, Mahsa Taziki
FedML
22 Feb 2018
Attack Strength vs. Detectability Dilemma in Adversarial Machine Learning
Christopher Frederickson, Michael Moore, Glenn Dawson, R. Polikar
AAML
20 Feb 2018
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection
Andrea Paudice, Luis Muñoz-González, András Gyorgy, Emil C. Lupu
AAML
08 Feb 2018
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, Basel Alomair
AAML, SILM
15 Dec 2017
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Battista Biggio, Fabio Roli
AAML
08 Dec 2017
Adversarial Detection of Flash Malware: Limitations and Open Issues
Computers & Security (Comput. Secur.), 2017
Davide Maiorca, Ambra Demontis, Battista Biggio, Maria Elena Chiappe, Giorgio Giacinto
AAML
27 Oct 2017
BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
SILM
22 Aug 2017
Page 7 of 7