Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020
18 December 2020
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein
    SILM

Papers citing "Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses"

50 / 162 papers shown

SteganoBackdoor: Stealthy and Data-Efficient Backdoor Attacks on Language Models
Eric Xue, Ruiyi Zhang, Zijun Zhang
AAML · 18 Nov 2025

Forgetting to Forget: Attention Sink as A Gateway for Backdooring LLM Unlearning
Bingqi Shang, Yiwei Chen, Yihua Zhang, Bingquan Shen, Sijia Liu
MU, KELM, AAML · 19 Oct 2025

Backdoor Unlearning by Linear Task Decomposition
Amel Abdelraheem, Alessandro Favero, Gérôme Bovet, Pascal Frossard
AAML, MU · 16 Oct 2025

Injection, Attack and Erasure: Revocable Backdoor Attacks via Machine Unlearning
Baogang Song, Dongdong Zhao, Jianwen Xiang, Qiben Xu, Zizhuo Yu
AAML · 15 Oct 2025

Backdoor Vectors: a Task Arithmetic View on Backdoor Attacks and Defenses
Stanisław Pawlak, Jan Dubiński, Daniel Marczak, Bartłomiej Twardowski
AAML, MoMe · 09 Oct 2025

Advancing Security in Software-Defined Vehicles: A Comprehensive Survey and Taxonomy
Khaoula Sghaier, Badis Hammi, Ghada Gharbi, Pierre Merdrignac, Pierre Parrend, Didier Verna
AAML · 08 Oct 2025

On the Fragility of Contribution Score Computation in Federated Learning
Balazs Pejo, Marcell Frank, Krisztian Varga, Peter Veliczky, G. Biczók
FedML · 24 Sep 2025

From Firewalls to Frontiers: AI Red-Teaming is a Domain-Specific Evolution of Cyber Red-Teaming
Anusha Sinha, Keltin Grimes, James Lucassen, Michael Feffer, Nathan M. VanHoudnos, Zhiwei Steven Wu, Hoda Heidari
AAML · 14 Sep 2025

Not All Samples Are Equal: Quantifying Instance-level Difficulty in Targeted Data Poisoning
William Xu, Yiwei Lu, Yihan Wang, Matthew Y.R. Yang, Zuoqiu Liu, Gautam Kamath, Yaoliang Yu
08 Sep 2025

Scam2Prompt: A Scalable Framework for Auditing Malicious Scam Endpoints in Production LLMs
Zhiyang Chen, Tara Saba, Xun Deng, X. Si, Fan Long
02 Sep 2025

Superior resilience to poisoning and amenability to unlearning in quantum machine learning
Yu-Qin Chen, Shi-Xin Zhang
AAML · 04 Aug 2025

Evading Data Provenance in Deep Neural Networks
Hongyu Zhu, Sichu Liang, Wenwen Wang, Zhuomeng Zhang, Fangqi Li, Shi-Lin Wang
AAML · 01 Aug 2025

ADAPT: A Pseudo-labeling Approach to Combat Concept Drift in Malware Detection
Md Tanvirul Alam, Aritran Piplai, Nidhi Rastogi
11 Jul 2025

PDLRecover: Privacy-preserving Decentralized Model Recovery with Machine Unlearning
Xiangman Li, Xiaodong Wu, Jianbing Ni, Mohamed Mahmoud, Maazen Alsabaan
AAML · 18 Jun 2025

SPBA: Utilizing Speech Large Language Model for Backdoor Attacks on Speech Classification Models
Wenhan Yao, Fen Xiao, Xiarun Chen, Jia Liu, Yongqiang He, Weiping Wen
AAML, SILM · 10 Jun 2025

Joint-GCG: Unified Gradient-Based Poisoning Attacks on Retrieval-Augmented Generation Systems
Haowei Wang, Rupeng Zhang, Peng Li, Mingyang Li, Yuekai Huang, Dandan Wang, Qing Wang
SILM, AAML · 06 Jun 2025

Towards Secure MLOps: Surveying Attacks, Mitigation Strategies, and Research Challenges
Raj Patel, Himanshu Tripathi, Jasper Stone, Noorbakhsh Amiri Golilarz, Sudip Mittal, Shahram Rahimi, Vini Chaudhary
AAML · 30 May 2025

Be Careful When Fine-tuning On Open-Source LLMs: Your Fine-tuning Data Could Be Secretly Stolen!
Zhexin Zhang, Yuhao Sun, Junxiao Yang, Shiyao Cui, Hongning Wang, Shiyu Huang, Minlie Huang
AAML · 21 May 2025

MTL-UE: Learning to Learn Nothing for Multi-Task Learning
Yi Yu, Song Xia, Siyuan Yang, Chenqi Kong, Wenhan Yang, Shijian Lu, Yap-Peng Tan, Alex Chichung Kot
08 May 2025

The Role of Open-Source LLMs in Shaping the Future of GeoAI
Xiao Shi Huang, Zhengzhong Tu, X. Ye, Michael Goodchild
24 Apr 2025

Antidistillation Sampling
Yash Savani, Asher Trockman, Zhili Feng, Avi Schwarzschild, Alexander Robey, Marc Finzi, J. Zico Kolter
17 Apr 2025

Data Poisoning in Deep Learning: A Survey
Pinlong Zhao, Weiyao Zhu, Pengfei Jiao, Di Gao, Ou Wu
AAML · 27 Mar 2025

Prototype Guided Backdoor Defense
Venkat Adithya Amula, Sunayana Samavedam, Saurabh Saini, Avani Gupta, Narayanan P J
AAML · 26 Mar 2025

SITA: Structurally Imperceptible and Transferable Adversarial Attacks for Stylized Image Generation
IEEE Transactions on Information Forensics and Security (TIFS), 2025
Jingdan Kang, Haoxin Yang, Yan Cai, Huaidong Zhang, Xuemiao Xu, Yong Du, Shengfeng He
AAML · 25 Mar 2025

Detecting and Preventing Data Poisoning Attacks on AI Models
Halima Ibrahim Kure, Pradipta Sarkar, Ahmed B. Ndanusa, Augustine O. Nwajana
AAML · 12 Mar 2025

BDPFL: Backdoor Defense for Personalized Federated Learning via Explainable Distillation
Chengcheng Zhu, J. Zhang, Di Wu, Guodong Long
AAML, FedML · 09 Mar 2025

Class-Conditional Neural Polarizer: A Lightweight and Effective Backdoor Defense by Purifying Poisoned Features
Mingli Zhu, Shaokui Wei, Hongyuan Zha, Baoyuan Wu
AAML · 23 Feb 2025

Commercial LLM Agents Are Already Vulnerable to Simple Yet Dangerous Attacks
Ang Li, Yin Zhou, Vethavikashini Chithrra Raghuram, Tom Goldstein, Micah Goldblum
AAML · 12 Feb 2025

Adversarial ML Problems Are Getting Harder to Solve and to Evaluate
Javier Rando, Jie Zhang, Nicholas Carlini, F. Tramèr
AAML, ELM · 04 Feb 2025

Decoding FL Defenses: Systemization, Pitfalls, and Remedies
M. A. Khan, Virat Shejwalkar, Yasra Chandio, Amir Houmansadr, Fatima M. Anwar
AAML · 03 Feb 2025

Robust and Transferable Backdoor Attacks Against Deep Image Compression With Selective Frequency Prior
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024
Yi Yu, Yufei Wang, Wenhan Yang, Lanqing Guo, Shijian Lu, Ling-yu Duan, Yap-Peng Tan, Alex C. Kot
AAML · 02 Dec 2024

What You See is Not What You Get: Neural Partial Differential Equations and The Illusion of Learning
Arvind Mohan, Ashesh Chattopadhyay, Jonah Miller
22 Nov 2024

Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense
Neural Information Processing Systems (NeurIPS), 2024
Rui Min, Zeyu Qin, Nevin L. Zhang, Li Shen, Minhao Cheng
AAML · 13 Oct 2024

Mitigating Memorization In Language Models
Mansi Sakarvadia, Aswathy Ajith, Arham Khan, Nathaniel Hudson, Caleb Geniesse, Kyle Chard, Yaoqing Yang, Ian Foster, Michael W. Mahoney
KELM, MU · 03 Oct 2024

Persistent Backdoor Attacks in Continual Learning
Zhen Guo, Abhinav Kumar, R. Tourani
AAML · 20 Sep 2024

Rethinking Backdoor Detection Evaluation for Language Models
Jun Yan, Wenjie Jacky Mo, Xiang Ren, Robin Jia
ELM · 31 Aug 2024

Flatness-aware Sequential Learning Generates Resilient Backdoors
Hoang Pham, The-Anh Ta, Anh Tran, Khoa D. Doan
FedML, AAML · 20 Jul 2024

Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks
Quang H. Nguyen, Nguyen Ngoc-Hieu, The-Anh Ta, Thanh Nguyen-Tang, Kok-Seng Wong, Hoang Thanh-Tung, Khoa D. Doan
AAML · 15 Jul 2024

Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks
Lukas Gosch, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar, Stephan Günnemann
AAML · 15 Jul 2024

Data Poisoning Attacks in Intelligent Transportation Systems: A Survey
Feilong Wang, Xin Wang, X. Ban
AAML · 06 Jul 2024

Distribution Learnability and Robustness
Shai Ben-David, Alex Bie, Gautam Kamath, Tosca Lechner
25 Jun 2024

Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel
AAML, MU · 25 Jun 2024

AI Risk Management Should Incorporate Both Safety and Security
Xiangyu Qi, Yangsibo Huang, Yi Zeng, Edoardo Debenedetti, Jonas Geiping, ..., Chaowei Xiao, Yue Liu, Dawn Song, Peter Henderson, Prateek Mittal
AAML · 29 May 2024

HiddenSpeaker: Generate Imperceptible Unlearnable Audios for Speaker Verification System
Zhisheng Zhang, Pengyang Huang
AAML · 24 May 2024

Nearest is Not Dearest: Towards Practical Defense against Quantization-conditioned Backdoor Attacks
Boheng Li, Yishuo Cai, Haowei Li, Feng Xue, Zhifeng Li, Yiming Li
MQ, AAML · 21 May 2024

Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders
International Conference on Machine Learning (ICML), 2024
Yi Yu, Yufei Wang, Song Xia, Wenhan Yang, Shijian Lu, Yap-Peng Tan, A.C. Kot
AAML · 02 May 2024

Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable
Haozhe Liu, Wentian Zhang, Bing Li, Bernard Ghanem, Jürgen Schmidhuber
DiffM, WIGM, AAML · 01 May 2024

Defending against Data Poisoning Attacks in Federated Learning via User Elimination
Nick Galanis
AAML · 19 Apr 2024

Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion
Hossein Souri, Arpit Bansal, Hamid Kazemi, Liam H. Fowl, Aniruddha Saha, Jonas Geiping, Andrew Gordon Wilson, Rama Chellappa, Tom Goldstein, Micah Goldblum
SILM, DiffM · 25 Mar 2024

Backdoor Secrets Unveiled: Identifying Backdoor Data with Optimized Scaled Prediction Consistency
Soumyadeep Pal, Yuguang Yao, Ren Wang, Bingquan Shen, Sijia Liu
AAML · 15 Mar 2024

Page 1 of 4