
Images manipulated with image editing tools can mislead viewers and pose serious risks to societal security. However, accurately localizing manipulated image regions remains challenging due to the severe scarcity of high-quality annotated data, which is laborious to create. To address this, we propose a novel approach that mitigates data scarcity by leveraging readily available web data. We utilize a large collection of manually forged images from the web, together with annotations generated automatically from a simpler auxiliary task, constrained image manipulation localization. Specifically, we introduce CAAAv2, a novel auto-annotation framework built on a category-aware, prior-feature-denoising paradigm that notably reduces task complexity. To further ensure annotation reliability, we propose QES, a novel metric that filters out low-quality annotations. Combining CAAAv2 and QES, we construct MIMLv2, a large-scale, diverse, and high-quality dataset containing 246,212 manually forged images with pixel-level mask annotations, over 120 times larger than existing handcrafted datasets such as IMD20. Additionally, we introduce Object Jitter, a technique that further enhances model training by generating high-quality manipulation artifacts. Building on these advances, we develop Web-IML, a new model designed to effectively leverage web-scale supervision for image manipulation localization. Extensive experiments demonstrate that our approach substantially alleviates the data scarcity problem and significantly improves the performance of various models on multiple real-world forgery benchmarks. With the proposed web supervision, Web-IML achieves a striking performance gain of 31% and surpasses the previous state-of-the-art SparseViT by 21.6 average IoU points. The dataset and code will be released at this https URL.