Are Vision-Language Models Safe in the Wild? A Meme-Based Benchmark Study
21 May 2025
DongGeon Lee, Joonwon Jang, Jihae Jeong, Hwanjo Yu
Links: arXiv (abs) · PDF · HTML · Hugging Face (8 upvotes) · GitHub (3★)
Papers citing "Are Vision-Language Models Safe in the Wild? A Meme-Based Benchmark Study" (7 of 7 papers shown)
- Jailbreaking on Text-to-Video Models via Scene Splitting Strategy
  Wonjun Lee, Haon Park, Doehyeon Lee, Bumsub Ham, Suhyun Kim
  26 Sep 2025

- Better Safe Than Sorry? Overreaction Problem of Vision Language Models in Visual Emergency Recognition
  Dasol Choi, Seunghyun Lee, Youngsook Song
  21 May 2025

- Output Constraints as Attack Surface: Exploiting Structured Generation to Bypass LLM Safety Mechanisms
  Shuoming Zhang, Jiacheng Zhao, Ruiyuan Xu, Xiaobing Feng, Huimin Cui
  AAML · 31 Mar 2025

- Exploiting Prefix-Tree in Structured Output Interfaces for Enhancing Jailbreak Attacking
  Yanzeng Li, Yunfan Xiong, Jialun Zhong, Jinchao Zhang, Jie Zhou, Lei Zou
  20 Feb 2025

- SafeDialBench: A Fine-Grained Safety Benchmark for Large Language Models in Multi-Turn Dialogues with Diverse Jailbreak Attacks
  Hongye Cao, Yanming Wang, Sijia Jing, Ziyue Peng, Zhixin Bai, ..., Yang Gao, Fanyu Meng, Xi Yang, Chao Deng, Junlan Feng
  AAML · 16 Feb 2025

- Great, Now Write an Article About That: The Crescendo Multi-Turn LLM Jailbreak Attack
  M. Russinovich, Ahmed Salem, Ronen Eldan
  02 Apr 2024

- FigStep: Jailbreaking Large Vision-Language Models via Typographic Visual Prompts
  Yichen Gong, Delong Ran, Jinyuan Liu, Conglei Wang, Tianshuo Cong, Anyu Wang, Sisi Duan, Xiaoyun Wang
  MLLM · 09 Nov 2023