Test-Time Backdoor Attacks on Multimodal Large Language Models
13 February 2024
Dong Lu, Tianyu Pang, Chao Du, Qian Liu, Xianjun Yang, Min Lin
Tags: AAML
Papers citing "Test-Time Backdoor Attacks on Multimodal Large Language Models" (14 of 14 papers shown):
Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models
Zhaoyi Liu, Huan Zhang
Tags: AAML · 25 Feb 2025

Topic-FlipRAG: Topic-Orientated Adversarial Opinion Manipulation Attacks to Retrieval-Augmented Generation Models
Y. Gong, Zhuo Chen, Miaokun Chen, Fengchang Yu, Wei-Tsung Lu, XiaoFeng Wang, Xiaozhong Liu, J. Liu
Tags: AAML, SILM · 03 Feb 2025

Defending Multimodal Backdoored Models by Repulsive Visual Prompt Tuning
Zhifang Zhang, Shuo He, Bingquan Shen, Lei Feng
Tags: AAML · 29 Dec 2024

Backdooring Vision-Language Models with Out-Of-Distribution Data
Weimin Lyu, Jiachen Yao, Saumya Gupta, Lu Pang, Tao Sun, Lingjie Yi, Lijie Hu, Haibin Ling, Chao Chen
Tags: VLM, AAML · 02 Oct 2024

Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models
Zhenyang Ni, Rui Ye, Yuxian Wei, Zhen Xiang, Yanfeng Wang, Siheng Chen
Tags: AAML · 19 Apr 2024

On the Adversarial Robustness of Multi-Modal Foundation Models
Christian Schlarmann, Matthias Hein
Tags: AAML · 21 Aug 2023

BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
Junnan Li, Dongxu Li, Silvio Savarese, Steven C. H. Hoi
Tags: VLM, MLLM · 30 Jan 2023

Data Poisoning Attacks Against Multimodal Encoders
Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, Yang Zhang
Tags: AAML · 30 Sep 2022

BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning
Lun Wang, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, D. Song
Tags: AAML · 02 May 2021

Zero-Shot Text-to-Image Generation
Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
Tags: VLM · 24 Feb 2021

DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
Yuanchun Li, Jiayi Hua, Haoyu Wang, Chunyang Chen, Yunxin Liu
Tags: FedML, SILM · 18 Jan 2021

Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles
Ranjie Duan, Xingjun Ma, Yisen Wang, James Bailey, A. K. Qin, Yun Yang
Tags: AAML · 08 Mar 2020

Clean-Label Backdoor Attacks on Video Recognition Models
Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang
Tags: AAML · 06 Mar 2020

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
Tags: SILM, AAML · 08 Jul 2016