Safeguarding Vision-Language Models Against Patched Visual Prompt Injectors

arXiv 2405.10529 · 17 May 2024
Authors: Jiachen Sun, Changsheng Wang, Jiong Wang, Yiwei Zhang, Chaowei Xiao
Tags: AAML, VLM

Papers citing "Safeguarding Vision-Language Models Against Patched Visual Prompt Injectors" (4 of 4 papers shown)
1. Safeguarding Vision-Language Models: Mitigating Vulnerabilities to Gaussian Noise in Perturbation-based Attacks
   Jiawei Wang, Yushen Zuo, Yuanjun Chai, Z. Liu, Yichen Fu, Yichun Feng, Kin-Man Lam
   Tags: AAML, VLM · 40 / 0 / 0 · 02 Apr 2025

2. Poisoned-MRAG: Knowledge Poisoning Attacks to Multimodal Retrieval Augmented Generation
   Yinuo Liu, Zenghui Yuan, Guiyao Tie, Jiawen Shi, Lichao Sun, Neil Zhenqiang Gong
   38 / 1 / 0 · 08 Mar 2025

3. DensePure: Understanding Diffusion Models towards Adversarial Robustness
   Chaowei Xiao, Zhongzhu Chen, Kun Jin, Jiong Wang, Weili Nie, Mingyan D. Liu, Anima Anandkumar, Bo-wen Li, D. Song
   Tags: DiffM · 30 / 35 / 0 · 01 Nov 2022

4. Training language models to follow instructions with human feedback
   Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
   Tags: OSLM, ALM · 308 / 11,909 / 0 · 04 Mar 2022