Weight Poisoning Attacks on Pre-trained Models

14 April 2020 · arXiv:2004.06660
Keita Kurita, Paul Michel, Graham Neubig
AAML, SILM

Papers citing "Weight Poisoning Attacks on Pre-trained Models"

Showing 17 of 67 citing papers:
Backdoor Pre-trained Models Can Transfer to All
Lujia Shen, S. Ji, Xuhong Zhang, Jinfeng Li, Jing Chen, Jie Shi, Chengfang Fang, Jianwei Yin, Ting Wang
AAML, SILM · 30 Oct 2021

Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer
Fanchao Qi, Yangyi Chen, Xurui Zhang, Mukai Li, Zhiyuan Liu, Maosong Sun
AAML, SILM · 14 Oct 2021

BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models
Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang, Jiwei Li, Chun Fan
SILM · 06 Oct 2021

How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data
Zhiyuan Zhang, Lingjuan Lyu, Weiqiang Wang, Lichao Sun, Xu Sun
03 Sep 2021

Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning
Linyang Li, Demin Song, Xiaonan Li, Jiehang Zeng, Ruotian Ma, Xipeng Qiu
31 Aug 2021

The Devil is in the GAN: Backdoor Attacks and Defenses in Deep Generative Models
Ambrish Rawat, Killian Levacher, M. Sinn
AAML · 03 Aug 2021

Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers
C. Ashcraft, Kiran Karra
14 Jun 2021

Topological Detection of Trojaned Neural Networks
Songzhu Zheng, Yikai Zhang, H. Wagner, Mayank Goswami, Chao Chen
AAML · 11 Jun 2021

Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution
Fanchao Qi, Yuan Yao, Sophia Xu, Zhiyuan Liu, Maosong Sun
SILM · 11 Jun 2021

Defending Against Backdoor Attacks in Natural Language Generation
Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao, Fei Wu, Jiwei Li, Tianwei Zhang
AAML, SILM · 03 Jun 2021

EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, X. Zhang
AAML · 16 Mar 2021

TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors
Ren Pang, Zheng-Wei Zhang, Xiangshan Gao, Zhaohan Xi, S. Ji, Peng Cheng, Xiapu Luo, Ting Wang
AAML · 16 Dec 2020

ONION: A Simple and Effective Defense Against Textual Backdoor Attacks
Fanchao Qi, Yangyi Chen, Mukai Li, Yuan Yao, Zhiyuan Liu, Maosong Sun
AAML · 20 Nov 2020

Concealed Data Poisoning Attacks on NLP Models
Eric Wallace, Tony Zhao, Shi Feng, Sameer Singh
SILM · 23 Oct 2020

Can Adversarial Weight Perturbations Inject Neural Backdoors?
Siddhant Garg, Adarsh Kumar, Vibhor Goel, Yingyu Liang
AAML · 04 Aug 2020

Model-Reuse Attacks on Deep Learning Systems
Yujie Ji, Xinyang Zhang, S. Ji, Xiapu Luo, Ting Wang
SILM, AAML · 02 Dec 2018

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD · 09 Mar 2017