What Changed? Detecting and Evaluating Instruction-Guided Image Edits with Multimodal Large Language Models

26 May 2025
Lorenzo Baraldi
Davide Bucciarelli
Federico Betti
Marcella Cornia
Lorenzo Baraldi
Nicu Sebe
Rita Cucchiara
Abstract

Instruction-based image editing models offer increased personalization opportunities in generative tasks. However, properly evaluating their results is challenging, and most existing metrics fall short in both alignment with human judgment and explainability. To tackle these issues, we introduce DICE (DIfference Coherence Estimator), a model designed to detect localized differences between the original and the edited image and to assess their relevance to the given modification request. DICE consists of two key components: a difference detector and a coherence estimator, both built on an autoregressive Multimodal Large Language Model (MLLM) and trained using a strategy that leverages self-supervision, distillation from inpainting networks, and full supervision. Through extensive experiments, we evaluate each stage of our pipeline, comparing different MLLMs within the proposed framework. We demonstrate that DICE effectively identifies coherent edits and evaluates images generated by different editing models, showing strong correlation with human judgment. We publicly release our source code, models, and data.
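
To make the two-stage design concrete, here is a minimal Python sketch of how a DICE-style evaluation pipeline could be wired together. All names below (Difference, DifferenceDetector, CoherenceEstimator, evaluate_edit) and the mean-aggregation step are illustrative assumptions, not the paper's released interface; in the actual system both stages are fine-tuned MLLMs trained with self-supervision, inpainting distillation, and full supervision as described above.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative sketch only: these classes and the aggregation rule are
# hypothetical, not the released DICE API.

@dataclass
class Difference:
    region: Tuple[int, int, int, int]  # bounding box of a localized change
    description: str                   # what changed inside that region

class DifferenceDetector:
    """Stage 1: an autoregressive MLLM prompted with the (original, edited)
    image pair, returning the localized differences between them."""
    def __init__(self, mllm):
        self.mllm = mllm

    def detect(self, original, edited) -> List[Difference]:
        # Assumed interface: the MLLM returns structured difference records.
        return self.mllm.find_differences(original, edited)

class CoherenceEstimator:
    """Stage 2: an MLLM that judges whether each detected difference is
    relevant to the user's modification request."""
    def __init__(self, mllm):
        self.mllm = mllm

    def score(self, diff: Difference, instruction: str) -> float:
        # Assumed interface: returns a coherence score in [0, 1].
        return self.mllm.rate_coherence(diff.description, instruction)

def evaluate_edit(detector: DifferenceDetector,
                  estimator: CoherenceEstimator,
                  original, edited, instruction: str) -> float:
    """Detect differences, then aggregate their coherence with the
    instruction. Mean aggregation is a simplification for illustration."""
    diffs = detector.detect(original, edited)
    if not diffs:
        return 0.0  # no detected change: the requested edit was not applied
    return sum(estimator.score(d, instruction) for d in diffs) / len(diffs)
```

In this reading, the detector localizes what changed and the estimator decides whether each change was asked for, so irrelevant or spurious edits pull the score down while faithful, instruction-covered edits raise it.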

View on arXiv
@article{baraldi2025_2505.20405,
  title={What Changed? Detecting and Evaluating Instruction-Guided Image Edits with Multimodal Large Language Models},
  author={Lorenzo Baraldi and Davide Bucciarelli and Federico Betti and Marcella Cornia and Lorenzo Baraldi and Nicu Sebe and Rita Cucchiara},
  journal={arXiv preprint arXiv:2505.20405},
  year={2025}
}