Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers
arXiv 2405.10612 · 17 May 2024
Shengyuan Yang, Jiawang Bai, Kuofeng Gao, Yong-Liang Yang, Yiming Li, Shu-Tao Xia
Topics: AAML, SILM
Papers citing "Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers" (5 of 5 papers shown)
Context is the Key: Backdoor Attacks for In-Context Learning with Vision Transformers
Gorka Abad, S. Picek, Lorenzo Cavallaro, A. Urbieta · Topics: SILM · 06 Sep 2024

Diversity-Aware Meta Visual Prompting
Qidong Huang, Xiaoyi Dong, Dongdong Chen, Weiming Zhang, Feifei Wang, Gang Hua, Neng H. Yu · Topics: VLM, VPVLM · 14 Mar 2023

AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition
Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, Ping Luo · 26 May 2022

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick · Topics: ViT, TPM · 11 Nov 2021

Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini · Topics: AAML · 04 May 2021