ResearchTrend.AI
arXiv:2405.10612

Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers

17 May 2024
Shengyuan Yang, Jiawang Bai, Kuofeng Gao, Yong-Liang Yang, Yiming Li, Shu-Tao Xia
Tags: AAML, SILM

Papers citing "Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers"

5 papers shown.

1. Context is the Key: Backdoor Attacks for In-Context Learning with Vision Transformers
   Gorka Abad, S. Picek, Lorenzo Cavallaro, A. Urbieta
   Tags: SILM
   06 Sep 2024

2. Diversity-Aware Meta Visual Prompting
   Qidong Huang, Xiaoyi Dong, Dongdong Chen, Weiming Zhang, Feifei Wang, Gang Hua, Neng H. Yu
   Tags: VLM, VPVLM
   14 Mar 2023

3. AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition
   Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, Ping Luo
   26 May 2022

4. Masked Autoencoders Are Scalable Vision Learners
   Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
   Tags: ViT, TPM
   11 Nov 2021

5. Poisoning the Unlabeled Dataset of Semi-Supervised Learning
   Nicholas Carlini
   Tags: AAML
   04 May 2021