CeTAD: Towards Certified Toxicity-Aware Distance in Vision Language Models

Recent advances in large vision-language models (VLMs) have demonstrated remarkable success across a wide range of visual understanding tasks. However, the robustness of these models against jailbreak attacks remains an open challenge. In this work, we propose a universal certified defence framework to safeguard VLMs rigorously against potential visual jailbreak attacks. First, we propose a novel distance metric to quantify semantic discrepancies between malicious and intended responses, capturing subtle differences often overlooked by conventional cosine similarity-based measures. Then, we devise a regressed certification approach that employs randomized smoothing to provide formal robustness guarantees against both adversarial and structural perturbations, even under black-box settings. Complementing this, our feature-space defence injects noise drawn from different distributions (e.g., Gaussian, Laplacian) into the latent embeddings to defend against both pixel-level and structure-level perturbations. Our results highlight the potential of a formally grounded, integrated strategy toward building more resilient and trustworthy VLMs.
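The abstract does not give implementation details, so the following is only a minimal illustrative sketch of the feature-space smoothing idea it describes: estimate the expected toxicity-aware distance when Gaussian or Laplacian noise is injected into the image embedding. The names `encode_image`, `toxicity_distance`, and `smoothed_distance` are hypothetical placeholders, not the CeTAD API, and the distance function here is a toy stand-in.

```python
# Illustrative sketch only: feature-space randomized smoothing with an
# assumed encoder and an assumed toxicity-aware distance (both placeholders).
import numpy as np

def encode_image(image: np.ndarray) -> np.ndarray:
    """Placeholder for the VLM's vision encoder (latent embedding)."""
    return image.reshape(-1)[:512].astype(np.float64)

def toxicity_distance(embedding: np.ndarray) -> float:
    """Placeholder toxicity-aware distance in [0, 1] between the response
    induced by `embedding` and an intended benign response."""
    return float(1.0 / (1.0 + np.exp(-embedding.mean())))

def smoothed_distance(image: np.ndarray, sigma: float = 0.25,
                      noise: str = "gaussian", n_samples: int = 100,
                      seed: int = 0) -> float:
    """Monte-Carlo estimate of the expected toxicity-aware distance when
    noise is injected into the latent embedding (feature-space smoothing)."""
    rng = np.random.default_rng(seed)
    z = encode_image(image)
    scores = []
    for _ in range(n_samples):
        if noise == "gaussian":
            eps = rng.normal(0.0, sigma, size=z.shape)
        else:  # Laplacian noise, the alternative mentioned in the abstract
            eps = rng.laplace(0.0, sigma, size=z.shape)
        scores.append(toxicity_distance(z + eps))
    return float(np.mean(scores))

if __name__ == "__main__":
    image = np.random.default_rng(1).random((224, 224, 3))
    print("smoothed toxicity distance:", smoothed_distance(image))
```

In standard randomized-smoothing analyses for bounded regression outputs, such a smoothed expectation varies slowly with the input as a function of the noise scale, which is the flavour of formal guarantee the abstract alludes to; the paper's actual certification procedure is defined in the full text.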
@article{yin2025_2503.10661,
  title   = {CeTAD: Towards Certified Toxicity-Aware Distance in Vision Language Models},
  author  = {Xiangyu Yin and Jiaxu Liu and Zhen Chen and Jinwei Hu and Yi Dong and Xiaowei Huang and Wenjie Ruan},
  journal = {arXiv preprint arXiv:2503.10661},
  year    = {2025}
}