Are Vision-Language Models Safe in the Wild? A Meme-Based Benchmark Study

21 May 2025
DongGeon Lee, Joonwon Jang, Jihae Jeong, Hwanjo Yu
arXiv:2505.15389 (abs / PDF / HTML) · HuggingFace (8 upvotes) · GitHub (3★)

Papers citing "Are Vision-Language Models Safe in the Wild? A Meme-Based Benchmark Study" (7 of 7 papers shown)

Jailbreaking on Text-to-Video Models via Scene Splitting Strategy
Wonjun Lee, Haon Park, Doehyeon Lee, Bumsub Ham, Suhyun Kim
26 Sep 2025

Better Safe Than Sorry? Overreaction Problem of Vision Language Models in Visual Emergency Recognition
Dasol Choi, Seunghyun Lee, Youngsook Song
21 May 2025

Output Constraints as Attack Surface: Exploiting Structured Generation to Bypass LLM Safety Mechanisms
Shuoming Zhang, Jiacheng Zhao, Ruiyuan Xu, Xiaobing Feng, Huimin Cui
AAML · 31 Mar 2025

Exploiting Prefix-Tree in Structured Output Interfaces for Enhancing Jailbreak Attacking
Yanzeng Li, Yunfan Xiong, Jialun Zhong, Jinchao Zhang, Jie Zhou, Lei Zou
20 Feb 2025

SafeDialBench: A Fine-Grained Safety Benchmark for Large Language Models in Multi-Turn Dialogues with Diverse Jailbreak Attacks
Hongye Cao, Yanming Wang, Sijia Jing, Ziyue Peng, Zhixin Bai, ..., Yang Gao, Fanyu Meng, Xi Yang, Chao Deng, Junlan Feng
AAML · 16 Feb 2025

Great, Now Write an Article About That: The Crescendo Multi-Turn LLM Jailbreak Attack
M. Russinovich, Ahmed Salem, Ronen Eldan
02 Apr 2024

FigStep: Jailbreaking Large Vision-Language Models via Typographic Visual Prompts
Yichen Gong, Delong Ran, Jinyuan Liu, Conglei Wang, Tianshuo Cong, Anyu Wang, Sisi Duan, Xiaoyun Wang
MLLM · 09 Nov 2023