We evaluate the information that can unintentionally leak into the low-dimensional output of a neural network by reconstructing an input image from a 40- or 32-element feature vector that is intended to describe only abstract attributes of a facial portrait. The reconstruction requires only black-box access to the image encoder that generates the feature vector. In contrast to previous work, we leverage recent advances in image generation and facial similarity, yielding a method that outperforms the current state of the art. Our strategy uses a pretrained StyleGAN and a new loss function that compares the perceptual similarity of two portraits by mapping them into the latent space of a FaceNet embedding. Additionally, we present a new technique that fuses the outputs of an ensemble to deliberately control specific aspects of the recreated image.
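To make the described pipeline concrete, the following is a minimal sketch of one plausible reading of the approach: the FaceNet-space perceptual loss comes directly from the abstract, while `black_box_encoder`, `stylegan_generator`, `mapping_net`, and the supervised training setup (learning a map from feature vectors to StyleGAN latents on images where the encoder can be queried) are hypothetical stand-ins, not the authors' confirmed implementation.

```python
import torch
import torch.nn.functional as F
from facenet_pytorch import InceptionResnetV1  # pip install facenet-pytorch

# Frozen FaceNet, used only as a perceptual identity metric.
facenet = InceptionResnetV1(pretrained='vggface2').eval()
for p in facenet.parameters():
    p.requires_grad_(False)

def facenet_perceptual_loss(recon, target):
    """Distance between two portraits in FaceNet's 512-d embedding space.

    Both tensors: (B, 3, H, W), values roughly in [-1, 1]; FaceNet
    expects ~160x160 face crops, so we resize before embedding.
    """
    recon = F.interpolate(recon, size=(160, 160), mode='bilinear',
                          align_corners=False)
    target = F.interpolate(target, size=(160, 160), mode='bilinear',
                           align_corners=False)
    return F.mse_loss(facenet(recon), facenet(target))

def train_step(mapping_net, stylegan_generator, black_box_encoder,
               images, optimizer):
    """One step of learning feature-vector -> StyleGAN latent (assumed setup).

    black_box_encoder is queried only for its outputs (no gradients),
    matching the black-box threat model; stylegan_generator is a frozen
    pretrained generator. Both are hypothetical stand-ins here.
    """
    with torch.no_grad():
        feats = black_box_encoder(images)      # (B, 40) attribute vector
    w = mapping_net(feats)                     # latent code for StyleGAN
    recon = stylegan_generator(w)              # reconstructed portraits
    loss = facenet_perceptual_loss(recon, images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point stated in the abstract is that reconstruction quality is judged not pixel-wise but in an identity-aware embedding space, so the optimization is pushed toward portraits that a face-recognition model considers the same person.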
@article{anderson2025_2503.09306,
  title   = {Revealing Unintentional Information Leakage in Low-Dimensional Facial Portrait Representations},
  author  = {Kathleen Anderson and Thomas Martinetz},
  journal = {arXiv preprint arXiv:2503.09306},
  year    = {2025}
}