Deepfake Labels Restore Reality, Especially for Those Who Dislike the Speaker

Abstract

Deepfake videos create dangerous possibilities for public misinformation. In this experiment (N=204), we investigated whether labeling videos as containing actual or deepfake statements from US President Biden helps participants later differentiate between true and fake information. Participants accurately recalled 93.8% of deepfake videos and 84.2% of actual videos, suggesting that labeling videos can help combat misinformation. Individuals who identified as Republican and reported lower favorability ratings of Biden performed better in distinguishing between actual and deepfake videos, a result consistent with the elaboration likelihood model (ELM), which predicts that people who distrust a message source will evaluate the message more critically.
