Fair Deepfake Detectors Can Generalize

Harry Cheng
Ming-Hui Liu
Yangyang Guo
Tianyi Wang
Liqiang Nie
Mohan Kankanhalli
Main: 9 pages · 5 figures · 4 tables · Bibliography: 5 pages
Abstract

Deepfake detection models face two critical challenges: generalization to unseen manipulations and demographic fairness across population groups. Existing studies often suggest that these two objectives are inherently conflicting, revealing a trade-off between them. In this paper, we, for the first time, uncover and formally define a causal relationship between fairness and generalization. Building on the back-door adjustment, we show that controlling for confounders (data distribution and model capacity) enables improved generalization via fairness interventions. Motivated by this insight, we propose Demographic Attribute-insensitive Intervention Detection (DAID), a plug-and-play framework composed of: i) Demographic-aware data rebalancing, which employs inverse-propensity weighting and subgroup-wise feature normalization to neutralize distributional biases; and ii) Demographic-agnostic feature aggregation, which uses a novel alignment loss to suppress sensitive-attribute signals. Across three cross-domain benchmarks, DAID consistently achieves superior performance in both fairness and generalization compared to several state-of-the-art detectors, validating both its theoretical foundation and practical effectiveness.
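To make the two components more concrete, below is a minimal PyTorch sketch of the ideas the abstract names: inverse-propensity weights derived from subgroup frequencies, subgroup-wise feature normalization, and a simple alignment loss that pulls subgroup-conditional feature means together. All function names, the exact loss form, and the 0.1 weighting are illustrative assumptions, not the paper's actual DAID formulation.

```python
# Hypothetical sketch of the two components described in the abstract.
# The paper's exact formulations may differ; all names here are illustrative.
import torch
import torch.nn.functional as F

def inverse_propensity_weights(groups: torch.Tensor) -> torch.Tensor:
    """Per-sample weights inversely proportional to subgroup frequency."""
    _, inverse, counts = torch.unique(groups, return_inverse=True, return_counts=True)
    weights = 1.0 / counts.float()                    # rare subgroups weigh more
    weights = weights / weights.sum() * len(counts)   # normalize to mean ~1
    return weights[inverse]

def subgroup_normalize(feats: torch.Tensor, groups: torch.Tensor,
                       eps: float = 1e-5) -> torch.Tensor:
    """Standardize features within each demographic subgroup."""
    out = torch.empty_like(feats)
    for g in groups.unique():
        mask = groups == g
        mu = feats[mask].mean(dim=0, keepdim=True)
        sd = feats[mask].std(dim=0, unbiased=False, keepdim=True)
        out[mask] = (feats[mask] - mu) / (sd + eps)
    return out

def alignment_loss(feats: torch.Tensor, groups: torch.Tensor) -> torch.Tensor:
    """Penalize divergence of subgroup-conditional feature means,
    one simple way to suppress sensitive-attribute signals."""
    means = torch.stack([feats[groups == g].mean(dim=0) for g in groups.unique()])
    return ((means - means.mean(dim=0, keepdim=True)) ** 2).sum(dim=1).mean()

# Usage: weight the detection loss per sample and add the alignment term.
feats = torch.randn(32, 128)                # backbone features (batch, dim)
groups = torch.randint(0, 4, (32,))         # demographic subgroup ids
logits = torch.randn(32)                    # real/fake logits
labels = torch.randint(0, 2, (32,)).float()

w = inverse_propensity_weights(groups)
feats_n = subgroup_normalize(feats, groups)
bce = F.binary_cross_entropy_with_logits(logits, labels, weight=w)
loss = bce + 0.1 * alignment_loss(feats_n, groups)   # 0.1: illustrative weight
```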

@article{cheng2025_2507.02645,
  title={Fair Deepfake Detectors Can Generalize},
  author={Harry Cheng and Ming-Hui Liu and Yangyang Guo and Tianyi Wang and Liqiang Nie and Mohan Kankanhalli},
  journal={arXiv preprint arXiv:2507.02645},
  year={2025}
}