CO-SPY: Combining Semantic and Pixel Features to Detect Synthetic Images by AI

24 March 2025
Siyuan Cheng
Lingjuan Lyu
Zhenting Wang
Xiangyu Zhang
Vikash Sehwag
Abstract

With the rapid advancement of generative AI, it is now possible to synthesize high-quality images in a few seconds. Despite the power of these technologies, they raise significant concerns regarding misuse. Current efforts to distinguish between real and AI-generated images may lack generalization, being effective for only certain types of generative models and susceptible to post-processing techniques like JPEG compression. To overcome these limitations, we propose a novel framework, Co-Spy, that first enhances existing semantic features (e.g., the number of fingers in a hand) and artifact features (e.g., pixel value differences), and then adaptively integrates them to achieve more general and robust synthetic image detection. Additionally, we create Co-Spy-Bench, a comprehensive dataset comprising 5 real image datasets and 22 state-of-the-art generative models, including the latest models like FLUX. We also collect 50k synthetic images in the wild from the Internet to enable evaluation in a more practical setting. Our extensive evaluations demonstrate that our detector outperforms existing methods under identical training conditions, achieving an average accuracy improvement of approximately 11% to 34%. The code is available at this https URL.
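The feature-fusion idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the artifact feature here is a simple high-frequency-residual statistic standing in for pixel-level cues, the semantic feature vector is a placeholder for an embedding from a vision model, and the fusion weights are hypothetical values that would normally be learned.

```python
import numpy as np

def artifact_feature(img):
    """Toy pixel-level artifact cue: statistics of the high-frequency
    residual, i.e. the image minus a 3x3 box-blurred copy."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    blurred = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = padded[i:i + 3, j:j + 3].mean()
    residual = img - blurred
    # Summarize the residual as two simple statistics.
    return np.array([residual.std(), np.abs(residual).mean()])

def fuse(semantic, artifact, w_sem, w_art, bias):
    """Weighted combination of semantic and artifact features,
    squashed through a sigmoid into a 'synthetic' probability."""
    z = semantic @ w_sem + artifact @ w_art + bias
    return 1.0 / (1.0 + np.exp(-z))

# Example with random stand-ins for an image and a semantic embedding.
rng = np.random.default_rng(0)
img = rng.random((16, 16))
sem = rng.random(4)                      # placeholder semantic embedding
art = artifact_feature(img)
prob = fuse(sem, art, rng.random(4), rng.random(2), 0.0)
```

In the actual framework the two feature branches are trained jointly and the integration is adaptive rather than a fixed linear layer; the sketch only shows the overall shape of combining the two signal types into one decision.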

@article{cheng2025_2503.18286,
  title={CO-SPY: Combining Semantic and Pixel Features to Detect Synthetic Images by AI},
  author={Siyuan Cheng and Lingjuan Lyu and Zhenting Wang and Xiangyu Zhang and Vikash Sehwag},
  journal={arXiv preprint arXiv:2503.18286},
  year={2025}
}