ResearchTrend.AI
Test-Time Backdoor Attacks on Multimodal Large Language Models
13 February 2024
Dong Lu, Tianyu Pang, Chao Du, Qian Liu, Xianjun Yang, Min Lin
AAML
ArXiv (abs) · PDF · HTML · GitHub (61★)

Papers citing "Test-Time Backdoor Attacks on Multimodal Large Language Models"

23 / 23 papers shown
Semantic Router: On the Feasibility of Hijacking MLLMs via a Single Adversarial Perturbation
Changyue Li, Jiaying Li, Youliang Yuan, Jiaming He, Zhicong Huang, Pinjia He
AAML
25 Nov 2025
Text Prompt Injection of Vision Language Models
Ruizhe Zhu
SILM, VLM
10 Oct 2025
TokenSwap: Backdoor Attack on the Compositional Understanding of Large Vision-Language Models
Zhifang Zhang, Qiqi Tao, Jiaqi Lv, Na Zhao, Bingquan Shen, Joey Tianyi Zhou
29 Sep 2025
Cowpox: Towards the Immunity of VLM-based Multi-Agent Systems
Yutong Wu, Jie Zhang, Yiming Li, Chao Zhang, Qing Guo, Nils Lukas, Tianwei Zhang
AAML
12 Aug 2025
X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP
Hanxun Huang, Sarah Monazam Erfani, Yige Li, Jiabo He, James Bailey
AAML
08 May 2025
BadToken: Token-level Backdoor Attacks to Multi-modal Large Language Models
Computer Vision and Pattern Recognition (CVPR), 2025
Zenghui Yuan, Jiawen Shi, Pan Zhou, Neil Zhenqiang Gong, Lichao Sun
AAML
20 Mar 2025
Survey of Adversarial Robustness in Multimodal Large Language Models
Chengze Jiang, Zhuangzhuang Wang, Minjing Dong, Jie Gui
AAML
18 Mar 2025
Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models
Computer Vision and Pattern Recognition (CVPR), 2025
Zhaoyi Liu, Huan Zhang
AAML
25 Feb 2025
DeepSeek on a Trip: Inducing Targeted Visual Hallucinations via Representation Vulnerabilities
Chashi Mahiul Islam, Samuel Jacob Chacko, Preston Horne, Xiuwen Liu
11 Feb 2025
Topic-FlipRAG: Topic-Orientated Adversarial Opinion Manipulation Attacks to Retrieval-Augmented Generation Models
Jiawei Liu, Zhuo Chen, Miaokun Chen, Fengchang Yu, Fan Zhang, Luyi Xing, Wei Lu, Jing Liu
AAML, SILM
03 Feb 2025
B-AVIBench: Towards Evaluating the Robustness of Large Vision-Language Model on Black-box Adversarial Visual-Instructions
IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2024
Hao Zhang, Wenqi Shao, Hong Liu, Yongqiang Ma, Ping Luo, Yu Qiao, Kaipeng Zhang, Jianchao Tan
VLM, AAML
31 Dec 2024
Defending Multimodal Backdoored Models by Repulsive Visual Prompt Tuning
Zhifang Zhang, Shuo He, Bingquan Shen, Lei Feng
AAML
29 Dec 2024
SoK: The Security-Safety Continuum of Multimodal Foundation Models through Information Flow and Global Game-Theoretic Analysis of Asymmetric Threats
Ruoxi Sun, Jiamin Chang, Hammond Pearce, Chaowei Xiao, B. Li, Qi Wu, Surya Nepal, Minhui Xue
17 Nov 2024
Exploring Response Uncertainty in MLLMs: An Empirical Evaluation under Misleading Scenarios
Yunkai Dang, Mengxi Gao, Yibo Yan, Xin Zou, Yanggan Gu, ..., Jingyu Wang, Peijie Jiang, Aiwei Liu, Jia Liu, Xuming Hu
05 Nov 2024
Replace-then-Perturb: Targeted Adversarial Attacks With Visual Reasoning for Vision-Language Models
Jonggyu Jang, Hyeonsu Lyu, Jungyeon Koh, H. Yang
VLM, AAML
01 Nov 2024
Backdooring Vision-Language Models with Out-Of-Distribution Data
International Conference on Learning Representations (ICLR), 2024
Weimin Lyu, Jiachen Yao, Saumya Gupta, Lu Pang, Tao Sun, Lingjie Yi, Lijie Hu, Haibin Ling, Chao Chen
VLM, AAML
02 Oct 2024
TrojVLM: Backdoor Attack Against Vision Language Models
European Conference on Computer Vision (ECCV), 2024
Weimin Lyu, Lu Pang, Tengfei Ma, Haibin Ling, Chao Chen
MLLM
28 Sep 2024
A Survey of Attacks on Large Vision-Language Models: Resources, Advances, and Future Trends
Daizong Liu, Mingyu Yang, Xiaoye Qu, Pan Zhou, Yu Cheng, Wei Hu
ELM, AAML
10 Jul 2024
BadRAG: Identifying Vulnerabilities in Retrieval Augmented Generation of Large Language Models
Jiaqi Xue, Meng Zheng, Yebowen Hu, Fei Liu, Xun Chen, Qian Lou
AAML, SILM
03 Jun 2024
Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models
Zhenyang Ni, Rui Ye, Yuxian Wei, Zhen Xiang, Yanfeng Wang, Siheng Chen
AAML
19 Apr 2024
Unbridled Icarus: A Survey of the Potential Perils of Image Inputs in Multimodal Large Language Model Security
Yihe Fan, Yuxin Cao, Ziyu Zhao, Ziyao Liu, Shaofeng Li
08 Apr 2024
Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2023
Pengzhou Cheng, Zongru Wu, Wei Du, Haodong Zhao, Wei Lu, Gongshen Liu
SILM, AAML
12 Sep 2023
Defending against Backdoor Attack on Deep Neural Networks
Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, Xinyu Lin, Xue Lin
AAML
26 Feb 2020
Page 1 of 1