
LayLens: Improving Deepfake Understanding through Simplified Explanations

International Conference on Multimodal Interaction (ICMI), 2025
Main: 2 pages, 5 figures, bibliography: 1 page
Abstract

This demonstration paper presents LayLens, a tool designed to make deepfake understanding easier for users of all educational backgrounds. While prior works often produce outputs laden with technical jargon, LayLens bridges the gap between model reasoning and human understanding through a three-stage pipeline: (1) explainable deepfake detection using a state-of-the-art forgery localization model, (2) natural language simplification of technical explanations using a vision-language model, and (3) visual reconstruction of a plausible original image via guided image editing. The interface presents both technical and layperson-friendly explanations alongside a side-by-side comparison of the uploaded and reconstructed images. A user study with 15 participants shows that simplified explanations significantly improve clarity and reduce cognitive load, with most users expressing increased confidence in identifying deepfakes. LayLens offers a step toward transparent, trustworthy, and user-centric deepfake forensics.
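The three-stage pipeline described in the abstract can be sketched as a simple composition of stages. The function names, interfaces, and stub return values below are hypothetical placeholders for illustration, not the authors' actual implementation; a real system would wrap a forgery localization model, a vision-language model, and a guided image editor behind these calls.

```python
# Hedged sketch of a LayLens-style three-stage pipeline.
# All model calls are stubbed with placeholder values.

def localize_forgery(image):
    # Stage 1: explainable detection -- a forgery localization model
    # would return a mask of manipulated regions plus a technical
    # explanation (both stubbed here).
    return {"mask": [[0, 1], [1, 0]],
            "technical": "inpainting artifacts localized near the mouth region"}

def simplify_explanation(technical_text):
    # Stage 2: a vision-language model would rewrite the jargon into
    # plain language (stubbed as a canned rewrite).
    return "Parts of the mouth look artificially redrawn."

def reconstruct_original(image, mask):
    # Stage 3: guided image editing would produce a plausible
    # original for side-by-side comparison (stubbed as a passthrough).
    return image

def laylens_pipeline(image):
    # Compose the three stages and return everything the interface
    # needs: technical text, layperson text, and the reconstruction.
    detection = localize_forgery(image)
    simple = simplify_explanation(detection["technical"])
    original = reconstruct_original(image, detection["mask"])
    return {"technical": detection["technical"],
            "layperson": simple,
            "reconstruction": original}
```

A caller would pass an uploaded image through `laylens_pipeline` and render the two explanations and the reconstruction side by side, matching the interface described above.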
