HRGR: Enhancing Image Manipulation Detection via Hierarchical Region-aware Graph Reasoning

29 October 2024
Xudong Wang
Yuezun Li
Huiyu Zhou
Jiaran Zhou
Junyu Dong
Abstract

Image manipulation detection aims to identify the authenticity of each pixel in an image. One typical approach to uncovering manipulation traces is to model image correlations. Previous methods commonly adopt grids, i.e., fixed-size squares, as graph nodes to model these correlations. However, such grids, being independent of image content, struggle to retain local content coherence, resulting in imprecise detection. To address this issue, we describe a new method named Hierarchical Region-aware Graph Reasoning (HRGR) to enhance image manipulation detection. Unlike existing grid-based methods, we model image correlations based on content-coherent feature regions with irregular shapes, generated by a novel Differentiable Feature Partition strategy. We then construct a Hierarchical Region-aware Graph based on these regions, both within and across different feature layers. Subsequently, we describe a structural-agnostic graph reasoning strategy tailored to our graph to enhance the representation of nodes. Our method is fully differentiable and can be seamlessly integrated into mainstream networks in an end-to-end manner, without requiring additional supervision. Extensive experiments demonstrate the effectiveness of our method for image manipulation detection, exhibiting its great potential as a plug-and-play component for existing architectures. Codes and models are available at this https URL.
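The abstract describes the method only at a high level. As a rough illustration of the general pattern it outlines (soft region pooling of a feature map, graph reasoning among region nodes, back-projection to pixels as a plug-in block), here is a minimal PyTorch-style sketch. It is not the authors' HRGR implementation: the softmax-based assignment is a simple differentiable stand-in for their Differentiable Feature Partition, the fully connected single-layer region graph omits HRGR's hierarchical cross-layer edges, and all names and parameters (RegionGraphReasoning, num_regions) are assumptions made for illustration.

```python
# Sketch only: NOT the authors' HRGR code. Names, the soft region assignment,
# and the single-layer fully connected region graph are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegionGraphReasoning(nn.Module):
    """Plug-in block: pool a feature map into K soft regions, reason over a
    region graph, then scatter the enhanced node features back to pixels."""

    def __init__(self, channels: int, num_regions: int = 16):
        super().__init__()
        self.assign = nn.Conv2d(channels, num_regions, kernel_size=1)  # soft region logits
        self.node_proj = nn.Linear(channels, channels)
        self.edge_proj = nn.Linear(channels, channels)
        self.out_proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # (1) Differentiable partition: each pixel gets a soft membership over K regions.
        a = F.softmax(self.assign(x), dim=1)                 # (B, K, H, W)
        a_flat = a.flatten(2)                                # (B, K, HW)
        x_flat = x.flatten(2).transpose(1, 2)                # (B, HW, C)
        # (2) Region nodes: membership-weighted average of pixel features.
        denom = a_flat.sum(dim=2, keepdim=True).clamp(min=1e-6)
        nodes = torch.bmm(a_flat, x_flat) / denom            # (B, K, C)
        # (3) Graph reasoning: attention-style message passing among region nodes.
        q = self.node_proj(nodes)
        k = self.edge_proj(nodes)
        adj = F.softmax(torch.bmm(q, k.transpose(1, 2)) / c ** 0.5, dim=-1)  # (B, K, K)
        nodes = nodes + torch.bmm(adj, nodes)                # residual node update
        # (4) Back-projection: distribute enhanced node features to their pixels.
        pix = torch.bmm(a_flat.transpose(1, 2), nodes)       # (B, HW, C)
        pix = pix.transpose(1, 2).reshape(b, c, h, w)
        return x + self.out_proj(pix)                        # residual, plug-and-play


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)                        # e.g. a backbone feature map
    block = RegionGraphReasoning(channels=64, num_regions=16)
    print(block(feat).shape)                                 # torch.Size([2, 64, 32, 32])
```

Because the block is residual and preserves the feature map's shape, it can in principle be dropped after any backbone stage, which mirrors the plug-and-play, end-to-end claim in the abstract; the hierarchical, cross-layer graph described in the paper would additionally link region nodes from different stages.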

@article{wang2025_2410.21861,
  title={HRGR: Enhancing Image Manipulation Detection via Hierarchical Region-aware Graph Reasoning},
  author={Xudong Wang and Jiaran Zhou and Huiyu Zhou and Junyu Dong and Yuezun Li},
  journal={arXiv preprint arXiv:2410.21861},
  year={2025}
}