Subjective Face Transform using Human First Impressions

Abstract

Humans tend to form quick subjective first impressions of non-physical attributes, such as perceived trustworthiness or attractiveness, when seeing someone's face. To understand which variations in a face lead to different subjective impressions, this work uses generative models to find semantically meaningful edits to a face image that change its perceived attributes. Unlike prior work that relied on statistical manipulation in feature space, our end-to-end framework considers the trade-off between preserving identity and changing perceptual attributes. It maps latent-space directions to changes in attribute scores, enabling a perceptually significant, identity-preserving transformation of any input face along an attribute axis according to a target change. We train on real and synthetic faces, evaluate on both in-domain and out-of-domain images using predictive models and human ratings, and demonstrate the generalizability of our approach. Ultimately, such a framework can be used to understand and explain trends and biases in the subjective interpretation of faces that do not depend on the subject's identity. We demonstrate this by showing improved performance on first-impression prediction when the training data is augmented with images generated by the proposed approach, giving the model a wider range of inputs from which to learn associations between face features and subjective attributes.
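
To make the latent-direction editing idea concrete, the following Python/NumPy sketch illustrates one way such an edit could work. It is an illustration under stated assumptions, not the authors' implementation: the function edit_latent, the attribute direction, the assumed linear relation between step size and score change, and the generator G are all hypothetical placeholders.

# Minimal sketch of editing a face's latent code along a learned attribute
# direction; all names here are illustrative placeholders, not the paper's code.
import numpy as np

def edit_latent(w, direction, target_delta, slope=1.0, max_step=3.0):
    """Move latent code w along a unit attribute direction.

    w            : (d,) latent code of the input face (from a hypothetical encoder)
    direction    : (d,) latent direction associated with the perceived attribute
    target_delta : desired change in the predicted attribute score
    slope        : assumed linear scale between step size and score change
    max_step     : cap on the step size, a simple proxy for preserving identity
    """
    direction = direction / np.linalg.norm(direction)
    step = np.clip(target_delta / slope, -max_step, max_step)
    return w + step * direction

# Usage with random placeholders standing in for a real encoder and generator:
rng = np.random.default_rng(0)
w = rng.normal(size=512)          # latent code of an input face
d = rng.normal(size=512)          # learned attribute direction (placeholder)
w_edited = edit_latent(w, d, target_delta=0.5)
# face_edited = G(w_edited)       # decode with a generator to obtain the edited image

The trade-off the abstract highlights, balancing identity preservation against the size of the perceptual change, appears here only as a crude step-size cap; the paper's end-to-end framework learns this trade-off instead.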

@article{roygaga2025_2309.15381,
  title={Subjective Face Transform using Human First Impressions},
  author={Chaitanya Roygaga and Joshua Krinsky and Kai Zhang and Kenny Kwok and Aparna Bharati},
  journal={arXiv preprint arXiv:2309.15381},
  year={2025}
}