ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time
arXiv: 2410.06625, 9 October 2024
Authors: Yi Ding, Bolian Li, Ruqi Zhang
Topic: MLLM
Papers citing "ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time" (4 papers):
1. Safety in Large Reasoning Models: A Survey
   Cheng Wang, Y. Liu, B. Li, Duzhen Zhang, Z. Li, Junfeng Fang (LRM), 24 Apr 2025
2. VLMGuard-R1: Proactive Safety Alignment for VLMs via Reasoning-Driven Prompt Optimization
   Menglan Chen, Xianghe Pang, Jingjing Dong, Wenhao Wang, Yaxin Du, Siheng Chen (LRM), 17 Apr 2025
3. Safety Mirage: How Spurious Correlations Undermine VLM Safety Fine-tuning
   Yiwei Chen, Yuguang Yao, Yihua Zhang, Bingquan Shen, Gaowen Liu, Sijia Liu (AAML, MU), 14 Mar 2025
4. Understanding and Rectifying Safety Perception Distortion in VLMs
   Xiaohan Zou, Jian Kang, George Kesidis, Lu Lin, 18 Feb 2025