Individualized Deepfake Detection Exploiting Traces Due to Double Neural-Network Operations

In today's digital landscape, journalists urgently need tools to verify the authenticity of facial images and videos of specific public figures before incorporating them into news stories. Existing deepfake detectors are not optimized for the detection task in which an image is associated with a specific, identifiable individual. This study focuses on deepfake detection for facial images of individual public figures. Guided by our theory-driven simulations, we condition the proposed detector on the identity of the depicted individual. While most detectors in the literature rely on perceptible or imperceptible artifacts in deepfake facial images, we demonstrate that detection performance can be improved by exploiting the idempotency property of neural networks. In our approach, the training process involves double neural-network operations, in which we pass an authentic image through a deepfake-simulating network twice. Experimental results show that the proposed method improves the area under the curve (AUC) from 0.92 to 0.94 and reduces its standard deviation by 17%. To address the need for evaluating detection performance on individual public figures, we curated and publicly released a dataset of approximately 32,000 images featuring 45 public figures, as existing deepfake datasets do not meet this criterion.
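The idempotency intuition behind the double neural-network operation can be illustrated with a toy sketch. This is not the paper's model: here a simple quantization function stands in for the deepfake-simulating network `g`, and `idempotency_score` is a hypothetical helper. The point is only that an approximately idempotent operation leaves a noticeable residual on an authentic input but almost none on an input that has already passed through it once.

```python
# Toy sketch of the idempotency idea (assumption: a stand-in for the
# deepfake-simulating network, NOT the paper's actual architecture).
import numpy as np


def g(x, levels=8):
    # Stand-in "deepfake-simulating network": quantization, which is
    # exactly idempotent, i.e., g(g(x)) == g(x).
    return np.round(x * levels) / levels


def idempotency_score(x):
    # Residual left by one more pass through g. Large -> likely authentic;
    # near zero -> x likely already passed through g once.
    return float(np.mean(np.abs(g(x) - x)))


rng = np.random.default_rng(0)
authentic = rng.random((64, 64))   # toy "authentic image"
fake = g(authentic)                # image that already went through g once

print(idempotency_score(authentic))  # noticeably positive residual
print(idempotency_score(fake))       # ~0: fake is a fixed point of g
```

In the paper's setting, the double operation (authentic image passed through the simulating network twice) supplies training pairs whose traces the detector learns to exploit; the sketch above only conveys why a second pass changes a fake far less than a first pass changes an authentic image.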
@article{rahman2025_2312.08034,
  title   = {Individualized Deepfake Detection Exploiting Traces Due to Double Neural-Network Operations},
  author  = {Mushfiqur Rahman and Runze Liu and Chau-Wai Wong and Huaiyu Dai},
  journal = {arXiv preprint arXiv:2312.08034},
  year    = {2025}
}