Cited By (arXiv:1708.08689)
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
29 August 2017 · Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli · AAML
Links: arXiv (abs) · PDF · HTML
Papers citing "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" (50 of 310 shown)
Teach LLMs to Phish: Stealing Private Information from Language Models
Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, Prateek Mittal · PILM · 01 Mar 2024

Exploring Privacy and Fairness Risks in Sharing Diffusion Models: An Adversarial Perspective
Xinjian Luo, Yangfan Jiang, Fei Wei, Yuncheng Wu, Xiaokui Xiao, Beng Chin Ooi · DiffM · 28 Feb 2024

Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness
David Fernández Llorca, Ronan Hamon, Henrik Junklewitz, Kathrin Grosse, Lars Kunze, ..., Nick Reed, Alexandre Alahi, Emilia Gómez, Ignacio E. Sánchez, Á. Kriston · 21 Feb 2024

Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
Yiwei Lu, Matthew Y.R. Yang, Gautam Kamath, Yaoliang Yu · AAML, SILM · 20 Feb 2024

Unlearnable Examples For Time Series
Yujing Jiang, Jiabo He, S. Erfani, James Bailey · AI4TS · 03 Feb 2024

Preference Poisoning Attacks on Reward Model Learning
Junlin Wu, Zhenghao Hu, Chaowei Xiao, Chenguang Wang, Ning Zhang, Yevgeniy Vorobeychik · AAML · 02 Feb 2024
Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu · 02 Feb 2024

Logit Poisoning Attack in Distillation-based Federated Learning and its Countermeasures
Yonghao Yu, Shunan Zhu, Jinglu Hu · AAML, FedML · 31 Jan 2024

Attacking Byzantine Robust Aggregation in High Dimensions
Sarthak Choudhary, Aashish Kolluri, Prateek Saxena · AAML · 22 Dec 2023

Detection and Defense of Unlearnable Examples
AAAI Conference on Artificial Intelligence (AAAI), 2023
Yifan Zhu, Lijia Yu, Xiao-Shan Gao · AAML · 14 Dec 2023

Synthesizing Physical Backdoor Datasets: An Automated Framework Leveraging Deep Generative Models
Sze Jue Yang, Chinh D. La, Quang H. Nguyen, Kok-Seng Wong, Anh Tran, Chee Seng Chan, Khoa D. Doan · AAML · 06 Dec 2023

Mendata: A Framework to Purify Manipulated Training Data
Zonghao Huang, Neil Zhenqiang Gong, Michael K. Reiter · 03 Dec 2023

Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise
AAAI Conference on Artificial Intelligence (AAAI), 2023
Yixin Liu, Kaidi Xu, Xun Chen, Lichao Sun · 22 Nov 2023

BrainWash: A Poisoning Attack to Forget in Continual Learning
Computer Vision and Pattern Recognition (CVPR), 2023
Ali Abbasi, Parsa Nooralinejad, Hamed Pirsiavash, Soheil Kolouri · CLL, KELM, AAML · 20 Nov 2023
PACOL: Poisoning Attacks Against Continual Learners
Huayu Li, G. Ditzler · AAML · 18 Nov 2023

AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification
IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2023
Zirui Gong, Liyue Shen, Yanjun Zhang, Leo Yu Zhang, Jingwei Wang, Guangdong Bai, Yong Xiang · AAML · 13 Nov 2023

Poison is Not Traceless: Fully-Agnostic Detection of Poisoning Attacks
Xinglong Chang, Katharina Dost, Gill Dobbie, Jörg Simon Wicker · AAML · 24 Oct 2023

Fast Adversarial Label-Flipping Attack on Tabular Data
Xinglong Chang, Gill Dobbie, Jörg Simon Wicker · AAML · 16 Oct 2023

RECESS Vaccine for Federated Learning: Proactive Defense Against Model Poisoning Attacks
Neural Information Processing Systems (NeurIPS), 2023
Haonan Yan, Wenjing Zhang, Qian Chen, Xiaoguang Li, Wenhai Sun, Hui Li, Xiao-La Lin · AAML · 09 Oct 2023

Adversarial Machine Learning for Social Good: Reframing the Adversary as an Ally
IEEE Transactions on Artificial Intelligence (IEEE TAI), 2023
Shawqi Al-Maliki, Adnan Qayyum, Hassan Ali, M. Abdallah, Junaid Qadir, D. Hoang, Dusit Niyato, Ala I. Al-Fuqaha · AAML · 05 Oct 2023

Shielding the Unseen: Privacy Protection through Poisoning NeRF with Spatial Deformation
Yihan Wu, Brandon Y. Feng, Heng-Chiao Huang · 04 Oct 2023
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks
Industrial Conference on Data Mining (IDM), 2023
Minh-Hao Van, Alycia N. Carey, Xintao Wu · TDI, AAML · 15 Sep 2023

Client-side Gradient Inversion Against Federated Learning from Poisoning
Jiaheng Wei, Yanjun Zhang, Leo Yu Zhang, Chao Chen, Shirui Pan, Kok-Leong Ong, Jinchao Zhang, Yang Xiang · AAML · 14 Sep 2023

Learning from Limited Heterogeneous Training Data: Meta-Learning for Unsupervised Zero-Day Web Attack Detection across Web Domains
Conference on Computer and Communications Security (CCS), 2023
Peiyang Li, Ye Wang, Qi Li, Zhuotao Liu, Ke Xu, Ju Ren, Zhiying Liu, Ruilin Lin · AAML · 07 Sep 2023

Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack
Sze Jue Yang, Q. Nguyen, Chee Seng Chan, Khoa D. Doan · AAML, DiffM · 31 Aug 2023

Test-Time Poisoning Attacks Against Test-Time Adaptation Models
IEEE Symposium on Security and Privacy (IEEE S&P), 2023
Tianshuo Cong, Xinlei He, Yun Shen, Yang Zhang · AAML, TTA · 16 Aug 2023

Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook
IEEE Access (IEEE Access), 2023
Amira Guesmi, Muhammad Abdullah Hanif, B. Ouni, Muhammed Shafique · AAML · 11 Aug 2023

An Introduction to Bi-level Optimization: Foundations and Applications in Signal Processing and Machine Learning
IEEE Signal Processing Magazine (IEEE Signal Process. Mag.), 2023
Yihua Zhang, Prashant Khanduri, Ioannis C. Tsaknakis, Yuguang Yao, Min-Fong Hong, Sijia Liu · AI4CE · 01 Aug 2023
Co(ve)rtex: ML Models as storage channels and their (mis-)applications
Md Abdullah Al Mamun, Quazi Mishkatul Alam, Erfan Shayegani, Pedram Zaree, Ihsen Alouani, Nael B. Abu-Ghazaleh · 17 Jul 2023

On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks
Wenxiao Wang, Soheil Feizi · AAML · 28 Jun 2023

Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2023
Javier Carnerero-Cano, Luis Muñoz-González, P. Spencer, Emil C. Lupu · AAML · 02 Jun 2023

Sharpness-Aware Data Poisoning Attack
International Conference on Learning Representations (ICLR), 2023
Pengfei He, Han Xu, Jie Ren, Yingqian Cui, Hui Liu, Charu C. Aggarwal, Shucheng Zhou · AAML · 24 May 2023

Attacks on Online Learners: a Teacher-Student Analysis
Neural Information Processing Systems (NeurIPS), 2023
R. Margiotta, Sebastian Goldt, G. Sanguinetti · AAML · 18 May 2023

Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples
ACM Multimedia (ACM MM), 2023
Wanzhu Jiang, Yunfeng Diao, He Wang, Jianxin Sun, Ming Wang, Richang Hong · 16 May 2023

Decision-based iterative fragile watermarking for model integrity verification
Z. Yin, Heng Yin, Hang Su, Xinpeng Zhang, Zhenzhe Gao · AAML · 13 May 2023

Diffusion Theory as a Scalpel: Detecting and Purifying Poisonous Dimensions in Pre-trained Language Models Caused by Backdoor or Bias
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Zhiyuan Zhang, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun · 08 May 2023
Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks
Jingfeng Zhang, Bo Song, Bo Han, Lei Liu, Gang Niu, Masashi Sugiyama · AAML · 30 Apr 2023

Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling
Ethan Wisdom, Tejas Gokhale, Chaowei Xiao, Yezhou Yang · 30 Mar 2023

The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models
H. M. Dolatabadi, S. Erfani, C. Leckie · DiffM · 15 Mar 2023

Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
International Conference on Machine Learning (ICML), 2023
Yiwei Lu, Gautam Kamath, Yaoliang Yu · AAML · 07 Mar 2023

Adversarial Attacks on Machine Learning in Embedded and IoT Platforms
Christian Westbrook, S. Pasricha · AAML · 03 Mar 2023

Poisoning Web-Scale Training Datasets is Practical
IEEE Symposium on Security and Privacy (IEEE S&P), 2023
Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Seth Neel, Kurt Thomas, Florian Tramèr · SILM · 20 Feb 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar · SILM, AAML · 14 Feb 2023

Temporal Robustness against Data Poisoning
Neural Information Processing Systems (NeurIPS), 2023
Wenxiao Wang, Soheil Feizi · AAML, OOD · 07 Feb 2023
Enhancement attacks in biomedical machine learning
M. Rosenblatt, J. Dadashkarimi, D. Scheinost · AAML · 05 Jan 2023

Analysis of Label-Flip Poisoning Attack on Machine Learning Based Malware Detector
Kshitiz Aryal, Maanak Gupta, Mahmoud Abdelsalam · AAML · 03 Jan 2023

XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei · AAML, FedML · 28 Dec 2022

Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks
Neural Information Processing Systems (NeurIPS), 2022
Jimmy Z. Di, Jack Douglas, Jayadev Acharya, Gautam Kamath, Ayush Sekhari · MU · 21 Dec 2022

A Review of Speech-centric Trustworthy Machine Learning: Privacy, Safety, and Fairness
APSIPA Transactions on Signal and Information Processing (TASIP), 2022
Tiantian Feng, Rajat Hebbar, Nicholas Mehlman, Xuan Shi, Aditya Kommineni, Shrikanth Narayanan · 18 Dec 2022

Mitigating Adversarial Gray-Box Attacks Against Phishing Detectors
IEEE Transactions on Dependable and Secure Computing (TDSC), 2022
Giovanni Apruzzese, V. S. Subrahmanian · AAML · 11 Dec 2022