Lingua-SafetyBench: A Benchmark for Safety Evaluation of Multilingual Vision-Language Models

Enyi Shi
Pengyang Shao
Yanxin Zhang
Chenhang Cui
Jiayi Lyu
Xu Xie
Xiaobo Xia
Fei Shen
Tat-Seng Chua
Main: 8 pages · Bibliography: 3 pages · Appendix: 22 pages · 36 figures · 7 tables
Abstract

Robust safety of vision-language large models (VLLMs) under joint multilingual and multimodal inputs remains underexplored. Existing benchmarks are typically multilingual but text-only, or multimodal but monolingual. Recent multilingual multimodal red-teaming efforts render harmful prompts into images, yet they rely heavily on typography-style visuals and lack semantically grounded image-text pairs, limiting coverage of realistic cross-modal interactions. We introduce Lingua-SafetyBench, a benchmark of 100,440 harmful image-text pairs across 10 languages, explicitly partitioned into image-dominant and text-dominant subsets to disentangle risk sources. Evaluating 11 open-source VLLMs reveals a consistent asymmetry: image-dominant risks yield a higher Attack Success Rate (ASR) in high-resource languages (HRLs), while text-dominant risks are more severe in non-high-resource languages. A controlled study on the Qwen series shows that scaling and version upgrades reduce ASR overall but disproportionately benefit HRLs, widening the gap between HRLs and non-HRLs under text-dominant risks. This underscores the necessity of language- and modality-aware safety alignment beyond mere scaling. To facilitate reproducibility and future research, we will publicly release our benchmark, model checkpoints, and source code. Code and dataset will be available at this https URL. Warning: this paper contains examples with unsafe content.
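The ASR reported per language and per risk subset can be sketched as below. This is a minimal illustration, not the paper's actual evaluation code: the record fields, subset labels, and the judge callable are all assumptions for the sketch.

```python
from collections import defaultdict

def attack_success_rate(records, is_harmful):
    """Compute ASR grouped by (language, subset).

    records: iterable of dicts with keys 'language', 'subset'
             ('image-dominant' or 'text-dominant'), and 'response'
             (the model's output for one harmful image-text pair).
    is_harmful: callable judging whether a response complies with
                the harmful request (e.g. an LLM-as-judge verdict).
    Returns: dict mapping (language, subset) -> fraction of attacks
             that succeeded.
    """
    totals = defaultdict(int)
    successes = defaultdict(int)
    for r in records:
        key = (r["language"], r["subset"])
        totals[key] += 1
        if is_harmful(r["response"]):
            successes[key] += 1
    return {k: successes[k] / totals[k] for k in totals}

# Toy usage with a trivial string-matching judge (illustrative only):
records = [
    {"language": "en", "subset": "image-dominant", "response": "harmful"},
    {"language": "en", "subset": "image-dominant", "response": "refusal"},
    {"language": "sw", "subset": "text-dominant", "response": "harmful"},
]
asr = attack_success_rate(records, lambda resp: resp == "harmful")
# asr[("en", "image-dominant")] -> 0.5
```

In practice the judge would be a safety classifier or LLM grader rather than string matching; grouping by (language, subset) is what lets the image-dominant vs. text-dominant asymmetry across HRLs and non-HRLs be measured.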
