What's wrong with this video? Comparing Explainers for Deepfake Detection

12 May 2021
Samuele Pino
Mark J. Carman
Paolo Bestagini
Abstract

Deepfakes are computer-manipulated videos in which the face of one individual has been replaced with that of another. Software for creating such forgeries is easy to use and increasingly popular, posing serious threats to personal reputation and public security. The quality of classifiers for detecting deepfakes has improved with the release of ever-larger datasets, but the understanding of why a particular video has been labelled as fake has not kept pace. In this work we develop, extend and compare white-box, black-box and model-specific techniques for explaining the labelling of real and fake videos. In particular, we adapt SHAP, GradCAM and self-attention models to the task of explaining the predictions of state-of-the-art detectors based on EfficientNet, trained on the Deepfake Detection Challenge (DFDC) dataset. We compare the resulting explanations, proposing metrics to quantify their visual features and desirable characteristics, and we conduct a user survey to collect opinions on the usefulness of the explainers.
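To illustrate the kind of white-box explanation the abstract refers to, the sketch below applies Grad-CAM to an EfficientNet image classifier with PyTorch and timm. The backbone (efficientnet_b0), the hooked layer (conv_head), the 224x224 input size, and the single-logit "fakeness" head are assumptions made for this example only; it does not reproduce the authors' DFDC-trained detectors.

# Minimal Grad-CAM sketch for a frame-level deepfake detector (illustrative only).
# Assumption: a timm EfficientNet-B0 with a 1-logit (real/fake) head; weights are
# generic ImageNet weights, not the DFDC-trained models from the paper.
import torch
import torch.nn.functional as F
import timm

model = timm.create_model("efficientnet_b0", pretrained=True, num_classes=1)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    # Forward hook: keep the feature map of the hooked layer.
    activations["value"] = out

def save_gradient(module, grad_in, grad_out):
    # Backward hook: keep the gradient of the score w.r.t. that feature map.
    gradients["value"] = grad_out[0]

# Hook the last convolutional feature map (conv_head in timm's EfficientNet).
target_layer = model.conv_head
target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

def grad_cam(frame: torch.Tensor) -> torch.Tensor:
    """Return an [H, W] heatmap in [0, 1] for one normalized frame of shape [3, 224, 224]."""
    logit = model(frame.unsqueeze(0))                 # [1, 1] fakeness score
    model.zero_grad()
    logit.sum().backward()                            # gradients w.r.t. the fake logit
    acts = activations["value"]                       # [1, C, h, w]
    grads = gradients["value"]                        # [1, C, h, w]
    weights = grads.mean(dim=(2, 3), keepdim=True)    # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=frame.shape[1:], mode="bilinear", align_corners=False)
    cam = cam.squeeze()
    return cam / (cam.max() + 1e-8)

# Usage: heatmap = grad_cam(preprocessed_face_crop); overlay the heatmap on the frame
# to see which facial regions drove the real/fake decision.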
