CapRecover: A Cross-Modality Feature Inversion Attack Framework on Vision Language Models

Main: 8 pages, 7 figures, 7 tables; Bibliography: 1 page
Abstract

As Vision-Language Models (VLMs) are increasingly deployed in split-DNN configurations, where visual encoders (e.g., ResNet, ViT) run on user devices and send intermediate features to the cloud, semantic information leakage poses a growing privacy risk. Existing approaches that reconstruct images from these intermediate features typically yield blurry, semantically ambiguous results. To address semantic leakage directly, we propose CapRecover, a cross-modality inversion framework that recovers high-level semantic content, such as labels or captions, from intermediate features without reconstructing the image.
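To make the threat model concrete, below is a minimal PyTorch sketch of the cross-modality inversion idea: an attacker-trained decoder that maps intercepted intermediate features directly to caption tokens, with no image reconstruction step. All module names, dimensions, and the transformer-decoder architecture here are illustrative assumptions, not the paper's actual CapRecover design.

```python
# A hypothetical feature-to-caption decoder, assuming the attacker can
# observe (B, C, H, W) feature maps such as a ResNet stage output.
import torch
import torch.nn as nn


class FeatureCaptionDecoder(nn.Module):
    def __init__(self, feat_channels=1024, d_model=512, vocab_size=30522,
                 num_layers=4, nhead=8, max_len=32):
        super().__init__()
        self.proj = nn.Linear(feat_channels, d_model)       # features -> token space
        self.tok_embed = nn.Embedding(vocab_size, d_model)  # caption token embeddings
        self.pos_embed = nn.Embedding(max_len, d_model)     # learned positions
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)       # logits over vocabulary

    def forward(self, feats, caption_ids):
        # Flatten the spatial grid into a sequence of "visual tokens".
        B, C, H, W = feats.shape
        memory = self.proj(feats.flatten(2).transpose(1, 2))  # (B, H*W, d_model)

        # Embed the teacher-forced caption prefix with positions.
        T = caption_ids.size(1)
        pos = torch.arange(T, device=caption_ids.device)
        tgt = self.tok_embed(caption_ids) + self.pos_embed(pos)

        # Causal mask so each position attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(feats.device)
        out = self.decoder(tgt, memory, tgt_mask=mask)
        return self.lm_head(out)  # (B, T, vocab_size)


# Usage sketch: the attacker trains this decoder with cross-entropy on
# (intercepted feature, caption) pairs for images it controls, then runs
# it on victims' features. Shapes below are stand-ins.
model = FeatureCaptionDecoder()
feats = torch.randn(2, 1024, 14, 14)       # intercepted intermediate features
prefix = torch.randint(0, 30522, (2, 16))  # caption token ids (teacher forcing)
logits = model(feats, prefix)
print(logits.shape)  # torch.Size([2, 16, 30522])
```

The key design point this sketch illustrates is that the decoder consumes the feature map as its cross-attention memory directly, so semantic content can leak even when pixel-level reconstruction from the same features would be blurry.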
