Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning

1 April 2018
Matthew Jagielski
Alina Oprea
Battista Biggio
Chang Liu
Cristina Nita-Rotaru
Bo Li
    AAML

Papers citing "Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning"

27 / 27 papers shown
Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach
Huazi Pan
Yanjun Zhang
Leo Yu Zhang
Scott Adams
Abbas Kouzani
Suiyang Khoo
FedML
32
0
0
22 May 2025
Covert Attacks on Machine Learning Training in Passively Secure MPC
Matthew Jagielski
Daniel Escudero
Rahul Rachuri
Peter Scholl
41
0
0
21 May 2025
Traceback of Poisoning Attacks to Retrieval-Augmented Generation
Baolei Zhang
Haoran Xin
Minghong Fang
Zhuqing Liu
Biao Yi
Tong Li
Zheli Liu
SILM
AAML
102
0
0
30 Apr 2025
FedSV: Byzantine-Robust Federated Learning via Shapley Value
Khaoula Otmani
Rachid Elazouzi
Vincent Labatut
FedML
AAML
135
2
0
24 Feb 2025
Poisoning Prevention in Federated Learning and Differential Privacy via Stateful Proofs of Execution
Norrathep Rattanavipanon
Ivan de Oliviera Nunes
91
0
0
28 Jan 2025
Poison-splat: Computation Cost Attack on 3D Gaussian Splatting
Jiahao Lu
Yifan Zhang
Qiuhong Shen
Xinchao Wang
Shuicheng Yan
3DGS
78
1
0
10 Oct 2024
On the Proactive Generation of Unsafe Images From Text-To-Image Models Using Benign Prompts
Yixin Wu
Ning Yu
Michael Backes
Yun Shen
Yang Zhang
DiffM
76
8
0
25 Oct 2023
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications
Ali Raza
Shujun Li
K. Tran
L. Koehl
Kim Duc Tran
AAML
65
4
0
18 Jul 2022
Cryptanalytic Extraction of Neural Network Models
Nicholas Carlini
Matthew Jagielski
Ilya Mironov
FedML
MLAU
MIACV
AAML
93
135
0
10 Mar 2020
Is feature selection secure against training data poisoning?
Huang Xiao
Battista Biggio
Gavin Brown
Giorgio Fumera
Claudia Eckert
Fabio Roli
AAML
SILM
36
423
0
21 Apr 2018
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen
Chang Liu
Bo Li
Kimberly Lu
D. Song
AAML
SILM
78
1,822
0
15 Dec 2017
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Battista Biggio
Fabio Roli
AAML
83
1,401
0
08 Dec 2017
Security Evaluation of Pattern Classifiers under Attack
Battista Biggio
Giorgio Fumera
Fabio Roli
AAML
37
442
0
02 Sep 2017
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
Luis Muñoz-González
Battista Biggio
Ambra Demontis
Andrea Paudice
Vasin Wongrassamee
Emil C. Lupu
Fabio Roli
AAML
85
628
0
29 Aug 2017
BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu
Brendan Dolan-Gavitt
S. Garg
SILM
67
1,754
0
22 Aug 2017
Evasion Attacks against Machine Learning at Test Time
Battista Biggio
Igino Corona
Davide Maiorca
B. Nelson
Nedim Srndic
Pavel Laskov
Giorgio Giacinto
Fabio Roli
AAML
93
2,140
0
21 Aug 2017
Membership Inference Attacks against Machine Learning Models
Reza Shokri
M. Stronati
Congzheng Song
Vitaly Shmatikov
SLR
MIALM
MIACV
200
4,075
0
18 Oct 2016
Data Poisoning Attacks on Factorization-Based Collaborative Filtering
Bo Li
Yining Wang
Aarti Singh
Yevgeniy Vorobeychik
AAML
53
341
0
29 Aug 2016
Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini
D. Wagner
OOD
AAML
155
8,497
0
16 Aug 2016
The Limitations of Deep Learning in Adversarial Settings
Nicolas Papernot
Patrick McDaniel
S. Jha
Matt Fredrikson
Z. Berkay Celik
A. Swami
AAML
52
3,947
0
24 Nov 2015
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
Nicolas Papernot
Patrick McDaniel
Xi Wu
S. Jha
A. Swami
AAML
45
3,061
0
14 Nov 2015
Explaining and Harnessing Adversarial Examples
Ian Goodfellow
Jonathon Shlens
Christian Szegedy
AAML
GAN
151
18,922
0
20 Dec 2014
Intriguing properties of neural networks
Christian Szegedy
Wojciech Zaremba
Ilya Sutskever
Joan Bruna
D. Erhan
Ian Goodfellow
Rob Fergus
AAML
157
14,831
1
21 Dec 2013
Robust High Dimensional Sparse Regression and Matching Pursuit
Yudong Chen
Constantine Caramanis
Shie Mannor
53
20
0
12 Jan 2013
Poisoning Attacks against Support Vector Machines
Battista Biggio
B. Nelson
Pavel Laskov
AAML
77
1,580
0
27 Jun 2012
Security Analysis of Online Centroid Anomaly Detection
Marius Kloft
Pavel Laskov
71
97
0
27 Feb 2010
Robust Regression and Lasso
Huan Xu
Constantine Caramanis
Shie Mannor
OOD
43
302
0
11 Nov 2008