ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2110.11571
Anti-Backdoor Learning: Training Clean Models on Poisoned Data


22 October 2021
Yige Li
X. Lyu
Nodens Koren
Lingjuan Lyu
Bo-wen Li
Xingjun Ma
    OnRL

Papers citing "Anti-Backdoor Learning: Training Clean Models on Poisoned Data"

50 / 206 papers shown
Potion: Towards Poison Unlearning
Stefan Schoepf
Jack Foster
Alexandra Brintrup
AAML
MU
47
7
0
13 Jun 2024
Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning
Xiaoting Lyu
Yufei Han
Wei Wang
Jingkai Liu
Yongsheng Zhu
Guangquan Xu
Jiqiang Liu
Xiangliang Zhang
AAML
FedML
51
6
0
10 Jun 2024
PSBD: Prediction Shift Uncertainty Unlocks Backdoor Detection
Wei Li
Pin-Yu Chen
Sijia Liu
Ren Wang
AAML
43
3
0
09 Jun 2024
Unveiling and Mitigating Backdoor Vulnerabilities based on Unlearning Weight Changes and Backdoor Activeness
Weilin Lin
Li Liu
Shaokui Wei
Jianze Li
Hui Xiong
AAML
45
2
0
30 May 2024
BAN: Detecting Backdoors Activated by Adversarial Neuron Noise
Xiaoyun Xu
Zhuoran Liu
Stefanos Koffas
Shujian Yu
S. Picek
AAML
32
1
0
30 May 2024
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi
Jeffrey Q. Jiang
Sunay Bhat
Gregory Pottie
AAML
31
0
0
28 May 2024
PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics
Sunay Bhat
Jeffrey Q. Jiang
Omead Brandon Pooladzandi
Alexander Branch
Gregory Pottie
AAML
32
2
0
28 May 2024
Towards Unified Robustness Against Both Backdoor and Adversarial Attacks
Zhenxing Niu
Yuyao Sun
Qiguang Miao
Rong Jin
Gang Hua
AAML
38
6
0
28 May 2024
Magnitude-based Neuron Pruning for Backdoor Defens
Nan Li
Haoyu Jiang
Ping Yi
AAML
19
1
0
28 May 2024
Rethinking Pruning for Backdoor Mitigation: An Optimization Perspective
Nan Li
Haiyang Yu
Ping Yi
AAML
28
0
0
28 May 2024
Partial train and isolate, mitigate backdoor attack
Yong Li
Han Gao
AAML
29
0
0
26 May 2024
Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack
Mingli Zhu
Siyuan Liang
Baoyuan Wu
AAML
42
14
0
25 May 2024
Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor
Shaokui Wei
Hongyuan Zha
Baoyuan Wu
AAML
53
3
0
25 May 2024
Unified Neural Backdoor Removal with Only Few Clean Samples through Unlearning and Relearning
Nay Myat Min
Long H. Pham
Jun Sun
MU
AAML
32
0
0
23 May 2024
SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks
Xuanli He
Qiongkai Xu
Jun Wang
Benjamin I. P. Rubinstein
Trevor Cohn
AAML
42
4
0
19 May 2024
Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers
Shengyuan Yang
Jiawang Bai
Kuofeng Gao
Yong-Liang Yang
Yiming Li
Shu-Tao Xia
AAML
SILM
35
5
0
17 May 2024
IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency
Linshan Hou
Ruili Feng
Zhongyun Hua
Wei Luo
Leo Yu Zhang
Yiming Li
AAML
43
19
0
16 May 2024
The Victim and The Beneficiary: Exploiting a Poisoned Model to Train a Clean Model on Poisoned Data
Zixuan Zhu
Rui Wang
Cong Zou
Lihua Jing
AAML
FedML
31
3
0
17 Apr 2024
UFID: A Unified Framework for Input-level Backdoor Detection on Diffusion Models
Zihan Guan
Mengxuan Hu
Sheng R. Li
Anil Vullikanti
DiffM
AAML
39
3
0
01 Apr 2024
LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning
Shuyang Cheng
Guanhong Tao
Yingqi Liu
Guangyu Shen
Shengwei An
Shiwei Feng
Xiangzhe Xu
Kaiyuan Zhang
Shiqing Ma
Xiangyu Zhang
AAML
32
3
0
25 Mar 2024
Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion
Hossein Souri
Arpit Bansal
Hamid Kazemi
Liam H. Fowl
Aniruddha Saha
Jonas Geiping
Andrew Gordon Wilson
Rama Chellappa
Tom Goldstein
Micah Goldblum
SILM
DiffM
21
1
0
25 Mar 2024
Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning
Siyuan Liang
Kuanrong Liu
Jiajun Gong
Jiawei Liang
Yuan Xun
Ee-Chien Chang
Xiaochun Cao
AAML
MU
29
13
0
24 Mar 2024
Have You Poisoned My Data? Defending Neural Networks against Data Poisoning
Fabio De Gaspari
Dorjan Hitaj
Luigi V. Mancini
AAML
TDI
19
4
0
20 Mar 2024
Backdoor Secrets Unveiled: Identifying Backdoor Data with Optimized Scaled Prediction Consistency
Soumyadeep Pal
Yuguang Yao
Ren Wang
Bingquan Shen
Sijia Liu
AAML
36
8
0
15 Mar 2024
Backdoor Attack with Mode Mixture Latent Modification
Hongwei Zhang
Xiaoyin Xu
Dongsheng An
Xianfeng Gu
Min Zhang
AAML
21
0
0
12 Mar 2024
A general approach to enhance the survivability of backdoor attacks by decision path coupling
Yufei Zhao
Dingji Wang
Bihuan Chen
Ziqian Chen
Xin Peng
AAML
19
0
0
05 Mar 2024
Here's a Free Lunch: Sanitizing Backdoored Models with Model Merge
Ansh Arora
Xuanli He
Maximilian Mozes
Srinibas Swain
Mark Dras
Qiongkai Xu
SILM
MoMe
AAML
58
12
0
29 Feb 2024
Model Pairing Using Embedding Translation for Backdoor Attack Detection on Open-Set Classification Tasks
A. Unnervik
Hatef Otroshi-Shahreza
Anjith George
Sébastien Marcel
AAML
SILM
32
0
0
28 Feb 2024
Model X-ray: Detect Backdoored Models via Decision Boundary
Yanghao Su
Jie Zhang
Ting Xu
Tianwei Zhang
Weiming Zhang
Neng H. Yu
AAML
47
1
0
27 Feb 2024
On the (In)feasibility of ML Backdoor Detection as an Hypothesis Testing Problem
Georg Pichler
Marco Romanelli
Divya Prakash Manivannan
P. Krishnamurthy
Farshad Khorrami
Siddharth Garg
25
2
0
26 Feb 2024
Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space
Zongru Wu
Zhuosheng Zhang
Pengzhou Cheng
Gongshen Liu
AAML
44
4
0
19 Feb 2024
Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection
Jiawei Liang
Siyuan Liang
Aishan Liu
Xiaojun Jia
Junhao Kuang
Xiaochun Cao
AAML
24
20
0
18 Feb 2024
Instruction Backdoor Attacks Against Customized LLMs
Rui Zhang
Hongwei Li
Rui Wen
Wenbo Jiang
Yuan Zhang
Michael Backes
Yun Shen
Yang Zhang
AAML
SILM
30
21
0
14 Feb 2024
Rethinking Machine Unlearning for Large Language Models
Sijia Liu
Yuanshun Yao
Jinghan Jia
Stephen Casper
Nathalie Baracaldo
...
Hang Li
Kush R. Varshney
Mohit Bansal
Sanmi Koyejo
Yang Liu
AILaw
MU
70
81
0
13 Feb 2024
Test-Time Backdoor Attacks on Multimodal Large Language Models
Dong Lu
Tianyu Pang
Chao Du
Qian Liu
Xianjun Yang
Min-Bin Lin
AAML
53
21
0
13 Feb 2024
Preference Poisoning Attacks on Reward Model Learning
Junlin Wu
Jiong Wang
Chaowei Xiao
Chenguang Wang
Ning Zhang
Yevgeniy Vorobeychik
AAML
24
5
0
02 Feb 2024
Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei
Ling Liu
25
16
0
02 Feb 2024
Multi-Trigger Backdoor Attacks: More Triggers, More Threats
Yige Li
Xingjun Ma
Jiabo He
Hanxun Huang
Yu-Gang Jiang
AAML
28
5
0
27 Jan 2024
BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning
Baoyuan Wu
Hongrui Chen
Mingda Zhang
Zihao Zhu
Shaokui Wei
Danni Yuan
Mingli Zhu
Ruotong Wang
Li Liu
Chaoxiao Shen
AAML
ELM
72
9
0
26 Jan 2024
WPDA: Frequency-based Backdoor Attack with Wavelet Packet Decomposition
Zhengyao Song
Yongqiang Li
Danni Yuan
Li Liu
Shaokui Wei
Baoyuan Wu
AAML
32
4
0
24 Jan 2024
End-to-End Anti-Backdoor Learning on Images and Time Series
Yujing Jiang
Xingjun Ma
S. Erfani
Yige Li
James Bailey
40
1
0
06 Jan 2024
Progressive Poisoned Data Isolation for Training-time Backdoor Defense
Yiming Chen
Haiwei Wu
Jiantao Zhou
AAML
32
9
0
20 Dec 2023
DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models
Jiachen Zhou
Peizhuo Lv
Yibing Lan
Guozhu Meng
Kai Chen
Hualong Ma
AAML
21
7
0
18 Dec 2023
A Comprehensive Survey of Attack Techniques, Implementation, and Mitigation Strategies in Large Language Models
Aysan Esmradi
Daniel Wankit Yip
C. Chan
AAML
32
11
0
18 Dec 2023
UltraClean: A Simple Framework to Train Robust Neural Networks against Backdoor Attacks
Bingyin Zhao
Yingjie Lao
AAML
22
1
0
17 Dec 2023
On the Difficulty of Defending Contrastive Learning against Backdoor Attacks
Changjiang Li
Ren Pang
Bochuan Cao
Zhaohan Xi
Jinghui Chen
Shouling Ji
Ting Wang
AAML
36
6
0
14 Dec 2023
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey
Yichen Wan
Youyang Qu
Wei Ni
Yong Xiang
Longxiang Gao
Ekram Hossain
AAML
47
33
0
14 Dec 2023
Defenses in Adversarial Machine Learning: A Survey
Baoyuan Wu
Shaokui Wei
Mingli Zhu
Meixi Zheng
Zihao Zhu
Mingda Zhang
Hongrui Chen
Danni Yuan
Li Liu
Qingshan Liu
AAML
30
14
0
13 Dec 2023
Activation Gradient based Poisoned Sample Detection Against Backdoor Attacks
Danni Yuan
Shaokui Wei
Mingda Zhang
Li Liu
Baoyuan Wu
AAML
40
5
0
11 Dec 2023
BELT: Old-School Backdoor Attacks can Evade the State-of-the-Art Defense with Backdoor Exclusivity Lifting
Huming Qiu
Junjie Sun
Mi Zhang
Xudong Pan
Min Yang
AAML
34
4
0
08 Dec 2023