arXiv:2205.01992
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
4 May 2022
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli
[AAML]
Papers citing "Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning" (19 papers shown)
Artificial Intelligence health advice accuracy varies across languages and contexts
Prashant Garg, Thiemo Fetzer — 25 Apr 2025
Statistically Testing Training Data for Unwanted Error Patterns using Rule-Oriented Regression
Stefan Rass, Martin Dallinger — 24 Mar 2025
Position: A taxonomy for reporting and describing AI security incidents
L. Bieringer, Kevin Paeth, Andreas Wespi, Kathrin Grosse, Alexandre Alahi — 19 Dec 2024
Human-inspired Perspectives: A Survey on AI Long-term Memory
Zihong He, Weizhe Lin, Hao Zheng, Fan Zhang, Matt Jones, Laurence Aitchison, X. Xu, Miao Liu, Per Ola Kristensson, Junxiao Shen — 01 Nov 2024
Timber! Poisoning Decision Trees
Stefano Calzavara, Lorenzo Cazzaro, Massimo Vettori — 01 Oct 2024 [AAML]
Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel — 25 Jun 2024 [AAML, MU]
Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Guangjing Wang, Ce Zhou, Yuanda Wang, Bocheng Chen, Hanqing Guo, Qiben Yan — 20 Nov 2023 [AAML, SILM]
Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training
Dario Lazzaro, Antonio Emanuele Cinà, Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo — 01 Jul 2023
A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks
Ziqiang Li, Hong Sun, Pengfei Xia, Beihao Xia, Xue Rui, Wei Zhang, Qinglang Guo, Bin Li — 14 Jun 2023 [AAML]
On the Limitations of Model Stealing with Uncertainty Quantification Models
David Pape, Sina Däubener, Thorsten Eisenhofer, Antonio Emanuele Cinà, Lea Schönherr — 09 May 2023
Machine Learning Security in Industry: A Quantitative Survey
Kathrin Grosse, L. Bieringer, Tarek R. Besold, Battista Biggio, Katharina Krombholz — 11 Jul 2022
On Collective Robustness of Bagging Against Data Poisoning
Ruoxin Chen, Zenan Li, Jie Li, Chentao Wu, Junchi Yan — 26 May 2022
Energy-Latency Attacks via Sponge Poisoning
Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo — 14 Mar 2022 [SILM]
Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters
Javier Carnerero-Cano, Luis Muñoz-González, P. Spencer, Emil C. Lupu — 23 May 2021 [AAML]
Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification
Chuanshuai Chen, Jiazhu Dai — 11 Jul 2020 [SILM]
Clean-Label Backdoor Attacks on Video Recognition Models
Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang — 06 Mar 2020 [AAML]
SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou, Florian Tramèr, Giancarlo Pellegrino — 02 Dec 2018 [AAML]
Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo — 29 Nov 2018 [FedML]
Bilevel Programming for Hyperparameter Optimization and Meta-Learning
Luca Franceschi, P. Frasconi, Saverio Salzo, Riccardo Grazzi, Massimiliano Pontil — 13 Jun 2018