Stronger Data Poisoning Attacks Break Data Sanitization Defenses

2 November 2018
Pang Wei Koh, Jacob Steinhardt, Percy Liang
arXiv: 1811.00741 (PDF, HTML)

Papers citing "Stronger Data Poisoning Attacks Break Data Sanitization Defenses"

44 / 44 papers shown

Traceback of Poisoning Attacks to Retrieval-Augmented Generation
Baolei Zhang, Haoran Xin, Minghong Fang, Zhuqing Liu, Biao Yi, Tong Li, Zheli Liu
SILM, AAML · 0 citations · 30 Apr 2025

Poison-splat: Computation Cost Attack on 3D Gaussian Splatting
Jiahao Lu, Yifan Zhang, Qiuhong Shen, Xinchao Wang, Shuicheng Yan
3DGS · 1 citation · 10 Oct 2024

Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel
AAML, MU · 12 citations · 25 Jun 2024

Support Vector Machines under Adversarial Label Contamination
Huang Xiao, Battista Biggio, B. Nelson, Han Xiao, Claudia Eckert, Fabio Roli
AAML · 231 citations · 01 Jun 2022

Generative Adversarial Networks
Gilad Cohen, Raja Giryes
GAN · 30,021 citations · 01 Mar 2022

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
AAML · 1,080 citations · 03 Apr 2018

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li
AAML · 757 citations · 01 Apr 2018

Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
AAML · 287 citations · 19 Mar 2018

Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
Nicolas Papernot, Patrick McDaniel
OOD, AAML · 505 citations · 13 Mar 2018

Sever: A Robust Meta-Algorithm for Stochastic Optimization
Ilias Diakonikolas, Gautam Kamath, D. Kane, Jerry Li, Jacob Steinhardt, Alistair Stewart
289 citations · 07 Mar 2018

Attack Strength vs. Detectability Dilemma in Adversarial Machine Learning
Christopher Frederickson, Michael Moore, Glenn Dawson, R. Polikar
AAML · 33 citations · 20 Feb 2018

Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection
Andrea Paudice, Luis Muñoz-González, András György, Emil C. Lupu
AAML · 145 citations · 08 Feb 2018

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner
AAML · 3,171 citations · 01 Feb 2018

Certified Defenses against Adversarial Examples
Aditi Raghunathan, Jacob Steinhardt, Percy Liang
AAML · 967 citations · 29 Jan 2018

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, D. Song
AAML, SILM · 1,822 citations · 15 Dec 2017

Provable defenses against adversarial examples via the convex outer adversarial polytope
Eric Wong, J. Zico Kolter
AAML · 1,495 citations · 02 Nov 2017

Certifying Some Distributional Robustness with Principled Adversarial Training
Aman Sinha, Hongseok Namkoong, Riccardo Volpi, John C. Duchi
OOD · 858 citations · 29 Oct 2017

Security Evaluation of Pattern Classifiers under Attack
Battista Biggio, Giorgio Fumera, Fabio Roli
AAML · 442 citations · 02 Sep 2017

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
AAML · 628 citations · 29 Aug 2017

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
SILM · 1,754 citations · 22 Aug 2017

Learning Geometric Concepts with Nasty Noise
Ilias Diakonikolas, D. Kane, Alistair Stewart
AAML · 86 citations · 05 Jul 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD · 11,962 citations · 19 Jun 2017

Certified Defenses for Data Poisoning Attacks
Jacob Steinhardt, Pang Wei Koh, Percy Liang
AAML · 751 citations · 09 Jun 2017

Ensemble Adversarial Training: Attacks and Defenses
Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel
AAML · 2,712 citations · 19 May 2017

Resilience: A Criterion for Learning in the Presence of Arbitrary Outliers
Jacob Steinhardt, Moses Charikar, Gregory Valiant
138 citations · 15 Mar 2017

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
TDI · 2,854 citations · 14 Mar 2017

Generative Poisoning Attack Method Against Neural Networks
Chaofei Yang, Qing Wu, Hai Helen Li, Yiran Chen
AAML · 218 citations · 03 Mar 2017

Being Robust (in High Dimensions) Can Be Practical
Ilias Diakonikolas, Gautam Kamath, D. Kane, Jerry Li, Ankur Moitra, Alistair Stewart
253 citations · 02 Mar 2017

Towards the Science of Security and Privacy in Machine Learning
Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, Michael P. Wellman
AAML · 472 citations · 11 Nov 2016

Learning from Untrusted Data
Moses Charikar, Jacob Steinhardt, Gregory Valiant
FedML, OOD · 293 citations · 07 Nov 2016

Data Poisoning Attacks on Factorization-Based Collaborative Filtering
Bo Li, Yining Wang, Aarti Singh, Yevgeniy Vorobeychik
AAML · 341 citations · 29 Aug 2016

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML · 5,868 citations · 08 Jul 2016

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow
SILM, AAML · 1,735 citations · 24 May 2016

Agnostic Estimation of Mean and Covariance
Kevin A. Lai, Anup B. Rao, Santosh Vempala
344 citations · 24 Apr 2016

Robust Estimators in High Dimensions without the Computational Intractability
Ilias Diakonikolas, Gautam Kamath, D. Kane, Jerry Li, Ankur Moitra, Alistair Stewart
510 citations · 21 Apr 2016

Second-Order Stochastic Optimization for Machine Learning in Linear Time
Naman Agarwal, Brian Bullins, Elad Hazan
ODL · 102 citations · 12 Feb 2016

Practical Black-Box Attacks against Machine Learning
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami
MLAU, AAML · 3,656 citations · 08 Feb 2016

Tight Bounds for Approximate Carathéodory and Beyond
Vahab Mirrokni, R. Leme, Adrian Vladu, Sam Chiu-wai Wong
32 citations · 29 Dec 2015

DeepFool: a simple and accurate method to fool deep neural networks
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard
AAML · 4,878 citations · 14 Nov 2015

Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
Nicolas Papernot, Patrick McDaniel, Xi Wu, S. Jha, A. Swami
AAML · 3,061 citations · 14 Nov 2015

Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy
AAML, GAN · 18,922 citations · 20 Dec 2014

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
AAML · 14,831 citations · 21 Dec 2013

Poisoning Attacks against Support Vector Machines
Battista Biggio, B. Nelson, Pavel Laskov
AAML · 1,580 citations · 27 Jun 2012

Security Analysis of Online Centroid Anomaly Detection
Marius Kloft, Pavel Laskov
97 citations · 27 Feb 2010