A Brief Review for Compression and Transfer Learning Techniques in DeepFake Detection

Abstract

Training and deploying deepfake detection models on edge devices offers the advantage of maintaining data privacy and confidentiality by processing data close to its source. However, this approach is constrained by the limited computational and memory resources available at the edge. To address this challenge, we explore compression techniques that reduce computational demands and inference time, alongside transfer learning methods that minimize training overhead. Using the Synthbuster, RAISE, and ForenSynths datasets, we evaluate the effectiveness of pruning, knowledge distillation (KD), quantization, fine-tuning, and adapter-based techniques. Our experimental results demonstrate that compression and transfer learning can both be applied effectively, with performance maintained even at a high compression level of 90%, provided the training and validation data originate from the same DeepFake model. However, when the test data are generated by DeepFake models not present in the training set, a domain generalization issue becomes evident.
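To make the compression techniques named above concrete, the sketch below combines two of them, magnitude pruning at the 90% level mentioned in the abstract and post-training dynamic quantization, using standard PyTorch utilities. The toy classifier and the choice of unstructured L1 pruning are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: 90% magnitude pruning followed by int8 dynamic
# quantization. The two-layer model is a hypothetical stand-in for a
# deepfake-detection backbone, not the architecture used in the paper.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 2),  # real vs. fake logits
)

# Unstructured L1 (magnitude) pruning: zero out 90% of each weight tensor.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")  # bake the sparsity into the weights

# Post-training dynamic quantization: Linear weights stored as int8,
# activations quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Sanity check on a dummy input.
print(quantized(torch.randn(1, 512)))
```

Note that pruning alone leaves the tensor shapes unchanged (the zeros still occupy memory), so in practice it is paired with sparse kernels or, as here, with quantization to realize actual size and latency savings on edge hardware.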

@article{karathanasis2025_2504.21066,
  title={A Brief Review for Compression and Transfer Learning Techniques in DeepFake Detection},
  author={Andreas Karathanasis and John Violos and Ioannis Kompatsiaris and Symeon Papadopoulos},
  journal={arXiv preprint arXiv:2504.21066},
  year={2025}
}