CapsFake: A Multimodal Capsule Network for Detecting Instruction-Guided Deepfakes

27 April 2025
Tuan Nguyen, Naseem Khan, Issa Khalil
Abstract

The rapid evolution of deepfake technology, particularly in instruction-guided image editing, threatens the integrity of digital images by enabling subtle, context-aware manipulations. Generated conditionally from real images and textual prompts, these edits are often imperceptible to both humans and existing detection systems, revealing significant limitations in current defenses. We propose a novel multimodal capsule network, CapsFake, designed to detect such deepfake image edits by integrating low-level capsules from visual, textual, and frequency-domain modalities. High-level capsules, predicted through a competitive routing mechanism, dynamically aggregate local features to identify manipulated regions with precision. Evaluated on diverse datasets, including MagicBrush, Unsplash Edits, Open Images Edits, and Multi-turn Edits, CapsFake outperforms state-of-the-art methods by up to 20% in detection accuracy. Ablation studies validate its robustness, achieving detection rates above 94% under natural perturbations and 96% against adversarial attacks, with excellent generalization to unseen editing scenarios. This approach establishes a powerful framework for countering sophisticated image manipulations.
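
Below is a minimal, hypothetical PyTorch sketch of how a multimodal capsule detector along these lines could be structured: each modality (visual, textual, frequency) contributes a set of low-level capsules, which are aggregated into high-level class capsules by iterative routing-by-agreement. The encoder dimensions, capsule sizes, and the specific routing variant are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of a CapsFake-style multimodal capsule detector.
# Encoder choices, capsule dimensions, and the routing variant are assumptions,
# not the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


def squash(s, dim=-1, eps=1e-8):
    # Standard capsule squashing non-linearity (Sabour et al., 2017).
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)


class ModalityCapsules(nn.Module):
    # Projects one modality's feature vector into a set of low-level capsules.
    def __init__(self, in_dim, num_caps=8, caps_dim=16):
        super().__init__()
        self.num_caps, self.caps_dim = num_caps, caps_dim
        self.proj = nn.Linear(in_dim, num_caps * caps_dim)

    def forward(self, x):                      # x: (B, in_dim)
        u = self.proj(x).view(-1, self.num_caps, self.caps_dim)
        return squash(u)                       # (B, num_caps, caps_dim)


class RoutingCapsules(nn.Module):
    # High-level capsules obtained by iterative (competitive) routing-by-agreement.
    def __init__(self, in_caps, in_dim, out_caps=2, out_dim=16, iters=3):
        super().__init__()
        self.iters = iters
        self.W = nn.Parameter(0.01 * torch.randn(1, in_caps, out_caps, out_dim, in_dim))

    def forward(self, u):                      # u: (B, in_caps, in_dim)
        u = u[:, :, None, :, None]             # (B, in_caps, 1, in_dim, 1)
        u_hat = (self.W @ u).squeeze(-1)       # (B, in_caps, out_caps, out_dim)
        b = torch.zeros(*u_hat.shape[:3], device=u.device)
        for _ in range(self.iters):
            c = F.softmax(b, dim=2)            # competition across high-level capsules
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1))   # (B, out_caps, out_dim)
            b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)
        return v


class CapsFakeSketch(nn.Module):
    # Fuses visual, textual, and frequency capsules; capsule length acts as class score.
    def __init__(self, vis_dim=512, txt_dim=512, freq_dim=256):
        super().__init__()
        self.vis = ModalityCapsules(vis_dim)
        self.txt = ModalityCapsules(txt_dim)
        self.freq = ModalityCapsules(freq_dim)
        self.routing = RoutingCapsules(in_caps=3 * 8, in_dim=16)  # 2 classes: real / edited

    def forward(self, vis_feat, txt_feat, freq_feat):
        low = torch.cat([self.vis(vis_feat), self.txt(txt_feat), self.freq(freq_feat)], dim=1)
        high = self.routing(low)               # (B, 2, 16)
        return high.norm(dim=-1)               # (B, 2) class scores


# Example forward pass with random features standing in for real modality encoders.
model = CapsFakeSketch()
scores = model(torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 256))
print(scores.shape)  # torch.Size([4, 2])

In this reading, the length of each high-level capsule is the score for its class (real vs. edited), which is the standard capsule-network readout; the paper's competitive routing would replace the generic routing loop shown here.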

@article{nguyen2025_2504.19212,
  title={CapsFake: A Multimodal Capsule Network for Detecting Instruction-Guided Deepfakes},
  author={Tuan Nguyen and Naseem Khan and Issa Khalil},
  journal={arXiv preprint arXiv:2504.19212},
  year={2025}
}