Contamination Attacks and Mitigation in Multi-Party Machine Learning
Jamie Hayes, O. Ohrimenko
arXiv:1901.02402 · 8 January 2019 · AAML, FedML
Papers citing "Contamination Attacks and Mitigation in Multi-Party Machine Learning" (25 of 25 shown):

Support Vector Machines under Adversarial Label Contamination
  Huang Xiao, Battista Biggio, B. Nelson, Han Xiao, Claudia Eckert, Fabio Roli
  AAML · 36 / 231 / 0 · 01 Jun 2022

Machine Learning with Membership Privacy using Adversarial Regularization
  Milad Nasr, Reza Shokri, Amir Houmansadr
  FedML, MIACV · 35 / 468 / 0 · 16 Jul 2018

An Algorithmic Framework For Differentially Private Data Analysis on Trusted Processors
  Joshua Allen, Bolin Ding, Janardhan Kulkarni, Harsha Nori, O. Ohrimenko, Sergey Yekhanin
  SyDa, FedML · 89 / 32 / 0 · 02 Jul 2018

Is feature selection secure against training data poisoning?
  Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, Fabio Roli
  AAML, SILM · 41 / 423 / 0 · 21 Apr 2018

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
  Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li
  AAML · 71 / 757 / 0 · 01 Apr 2018

Understanding Membership Inferences on Well-Generalized Learning Models
  Yunhui Long, Vincent Bindschaedler, Lei Wang, Diyue Bu, Xiaofeng Wang, Haixu Tang, Carl A. Gunter, Kai Chen
  MIALM, MIACV · 29 / 224 / 0 · 13 Feb 2018

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
  Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, D. Song
  AAML, SILM · 80 / 1,822 / 0 · 15 Dec 2017

Prochlo: Strong Privacy for Analytics in the Crowd
  Andrea Bittau, Ulfar Erlingsson, Petros Maniatis, Ilya Mironov, A. Raghunathan, David Lie, Mitch Rudominer, Ushasree Kode, J. Tinnés, B. Seefeld
  106 / 279 / 0 · 02 Oct 2017

Machine Learning Models that Remember Too Much
  Congzheng Song, Thomas Ristenpart, Vitaly Shmatikov
  VLM · 52 / 511 / 0 · 22 Sep 2017

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
  Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
  SILM · 72 / 1,758 / 0 · 22 Aug 2017

Understanding Black-box Predictions via Influence Functions
  Pang Wei Koh, Percy Liang
  TDI · 136 / 2,854 / 0 · 14 Mar 2017

Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning
  Briland Hitaj, G. Ateniese, Fernando Perez-Cruz
  FedML · 107 / 1,385 / 0 · 24 Feb 2017

Learning to Pivot with Adversarial Networks
  Gilles Louppe, Michael Kagan, Kyle Cranmer
  46 / 227 / 0 · 03 Nov 2016

Membership Inference Attacks against Machine Learning Models
  Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov
  SLR, MIALM, MIACV · 203 / 4,075 / 0 · 18 Oct 2016

Minimax Filter: Learning to Preserve Privacy from Inference Attacks
  Jihun Hamm
  23 / 82 / 0 · 12 Oct 2016

Towards Evaluating the Robustness of Neural Networks
  Nicholas Carlini, D. Wagner
  OOD, AAML · 168 / 8,513 / 0 · 16 Aug 2016

Adversarial examples in the physical world
  Alexey Kurakin, Ian Goodfellow, Samy Bengio
  SILM, AAML · 491 / 5,878 / 0 · 08 Jul 2016

Deep Learning with Differential Privacy
  Martín Abadi, Andy Chu, Ian Goodfellow, H. B. McMahan, Ilya Mironov, Kunal Talwar, Li Zhang
  FedML, SyDa · 162 / 6,069 / 0 · 01 Jul 2016

Learning Privately from Multiparty Data
  Jihun Hamm, Yingjun Cao, M. Belkin
  FedML · 31 / 165 / 0 · 10 Feb 2016

The Limitations of Deep Learning in Adversarial Settings
  Nicolas Papernot, Patrick McDaniel, S. Jha, Matt Fredrikson, Z. Berkay Celik, A. Swami
  AAML · 66 / 3,947 / 0 · 24 Nov 2015

Censoring Representations with an Adversary
  Harrison Edwards, Amos Storkey
  AAML, FaML · 47 / 504 / 0 · 18 Nov 2015

DeepFool: a simple and accurate method to fool deep neural networks
  Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard
  AAML · 95 / 4,878 / 0 · 14 Nov 2015

Explaining and Harnessing Adversarial Examples
  Ian Goodfellow, Jonathon Shlens, Christian Szegedy
  AAML, GAN · 163 / 18,922 / 0 · 20 Dec 2014

Convolutional Neural Networks for Sentence Classification
  Yoon Kim
  AILaw, VLM · 554 / 13,395 / 0 · 25 Aug 2014

Poisoning Attacks against Support Vector Machines
  Battista Biggio, B. Nelson, Pavel Laskov
  AAML · 89 / 1,580 / 0 · 27 Jun 2012