Generative Poisoning Attack Method Against Neural Networks
arXiv:1703.01340 · 3 March 2017
Chaofei Yang, Qing Wu, Hai Helen Li, Yiran Chen
AAML
Papers citing "Generative Poisoning Attack Method Against Neural Networks" (29 of 29 papers shown)
On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning
Yongyi Su, Yushu Li, Nanqing Liu, Kui Jia, Xulei Yang, Chuan-Sheng Foo, Xun Xu
TTA, AAML · 07 Oct 2024
RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models
Jiong Wang, Junlin Wu, Muhao Chen, Yevgeniy Vorobeychik, Chaowei Xiao
AAML · 16 Nov 2023
Machine Unlearning Methodology base on Stochastic Teacher Network
Xulong Zhang, Jianzong Wang, Ning Cheng, Yifu Sun, Chuanyao Zhang, Jing Xiao
MU · 28 Aug 2023
Test-Time Poisoning Attacks Against Test-Time Adaptation Models
Tianshuo Cong, Xinlei He, Yun Shen, Yang Zhang
AAML, TTA · 16 Aug 2023
A Blockchain-based Platform for Reliable Inference and Training of Large-Scale Models
Sanghyeon Park, Junmo Lee, Soo-Mook Moon
06 May 2023
Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling
Ethan Wisdom, Tejas Gokhale, Chaowei Xiao, Yezhou Yang
30 Mar 2023
The Value of Out-of-Distribution Data
Ashwin De Silva, Rahul Ramesh, Carey E. Priebe, Pratik Chaudhari, Joshua T. Vogelstein
OODD · 23 Aug 2022
Integrity Authentication in Tree Models
Weijie Zhao, Yingjie Lao, Ping Li
30 May 2022
One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks
Shutong Wu, Sizhe Chen, Cihang Xie, X. Huang
AAML · 24 May 2022
Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun
AAML · 21 Feb 2022
A Survey on Poisoning Attacks Against Supervised Machine Learning
Wenjun Qiu
AAML · 05 Feb 2022
Towards Adversarial Evaluations for Inexact Machine Unlearning
Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip H. S. Torr, Ponnurangam Kumaraguru
AAML, ELM, MU · 17 Jan 2022
De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks
Jian Chen, Xuxin Zhang, Rui Zhang, Chen Wang, Ling Liu
AAML · 08 May 2021
SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics
J. Hayase, Weihao Kong, Raghav Somani, Sewoong Oh
AAML · 22 Apr 2021
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
Antonio Emanuele Cinà, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
AAML · 23 Mar 2021
Unlearnable Examples: Making Personal Data Unexploitable
Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey, Yisen Wang
MIACV · 13 Jan 2021
Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers
T. Shapira, David Berend, Ishai Rosenberg, Yang Liu, A. Shabtai, Yuval Elovici
AAML · 30 Oct 2020
Bias Field Poses a Threat to DNN-based X-Ray Recognition
Binyu Tian, Qing-Wu Guo, Felix Juefei-Xu, W. L. Chan, Yupeng Cheng, Xiaohong Li, Xiaofei Xie, Shengchao Qin
AAML, AI4CE · 19 Sep 2020
Data Poisoning Attacks Against Federated Learning Systems
Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu
FedML · 16 Jul 2020
Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces
Chaofei Yang, Lei Ding, Yiran Chen, H. Li
AAML · 12 Jun 2020
Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic
Zhen Xiang, David J. Miller, Hang Wang, G. Kesidis
AAML · 18 Nov 2019
Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
Rémi Bernhard, Pierre-Alain Moëllic, J. Dutertre
AAML, MQ · 27 Sep 2019
On Defending Against Label Flipping Attacks on Malware Detection Systems
R. Taheri, R. Javidan, Mohammad Shojafar, Zahra Pooranian, A. Miri, Mauro Conti
AAML · 13 Aug 2019
Poisoning Attacks with Generative Adversarial Nets
Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu
AAML · 18 Jun 2019
Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks
Felipe A. Mejia, Paul Gamble, Z. Hampel-Arias, M. Lomnitz, Nina Lopatina, Lucas Tindall, M. Barrios
SILM · 15 Jun 2019
Bypassing Backdoor Detection Algorithms in Deep Learning
T. Tan, Reza Shokri
FedML, AAML · 31 May 2019
A new Backdoor Attack in CNNs by training set corruption without label poisoning
Mauro Barni, Kassem Kallas, B. Tondi
AAML · 12 Feb 2019
Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
AAML · 19 Mar 2018
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, D. Song
AAML, SILM · 15 Dec 2017