
arXiv:2509.16567 (v2, latest)

V-CECE: Visual Counterfactual Explanations via Conceptual Edits

20 September 2025
Nikolaos Spanos
Maria Lymperaiou
Giorgos Filandrianos
Konstantinos Thomas
Athanasios Voulodimos
Giorgos Stamou
arXiv (abs) · PDF · HTML
Main: 10 pages · 14 figures · 10 tables · Bibliography: 3 pages · Appendix: 9 pages
Abstract

Recent black-box counterfactual generation frameworks fail to account for the semantic content of their proposed edits and rely heavily on training to guide the generation process. We propose a novel, plug-and-play, black-box counterfactual generation framework that suggests step-by-step edits backed by theoretical guarantees of edit optimality, producing human-level counterfactual explanations with zero training. Our framework utilizes a pre-trained image editing diffusion model and operates without access to the internals of the classifier, leading to an explainable counterfactual generation process. Throughout our experiments, we showcase the explanatory gap between human reasoning and neural model behavior using Convolutional Neural Network (CNN), Vision Transformer (ViT), and Large Vision-Language Model (LVLM) classifiers, substantiated through a comprehensive human evaluation.
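The step-by-step, black-box edit loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual method: in V-CECE the edit is performed by a pre-trained image editing diffusion model and candidate edits are conceptual, whereas here the classifier, the edit proposer, and the editor are toy stand-ins (concept sets instead of images) so the control flow is self-contained and runnable. All function and variable names are hypothetical.

```python
def generate_counterfactual(x, classify, score, propose_edits, apply_edit,
                            target, max_steps=10):
    """Greedily apply conceptual edits until the black-box classifier
    predicts `target`; the applied edit sequence is the explanation.
    Only the classifier's outputs are used, never its internals."""
    applied = []
    for _ in range(max_steps):
        if classify(x) == target:
            return x, applied                      # counterfactual reached
        candidates = propose_edits(x)
        if not candidates:
            break
        # Pick the edit whose result scores highest for the target class.
        best = max(candidates, key=lambda e: score(apply_edit(x, e), target))
        x = apply_edit(x, best)
        applied.append(best)
    return (x, applied) if classify(x) == target else (None, applied)


# Toy stand-ins (hypothetical): "images" are frozen sets of concepts,
# and edits add or remove a single concept.
def classify(x):
    return "zebra" if "stripes" in x else "horse"

def score(x, target):
    return 1.0 if classify(x) == target else 0.0

def propose_edits(x):
    return [("add", "stripes"), ("add", "saddle"), ("remove", "mane")]

def apply_edit(x, edit):
    op, concept = edit
    return x | {concept} if op == "add" else x - {concept}


cf, edits = generate_counterfactual(frozenset({"mane"}), classify, score,
                                    propose_edits, apply_edit, target="zebra")
```

In this toy run a single edit, adding the "stripes" concept, flips the prediction from "horse" to "zebra"; the returned edit list plays the role of the human-readable explanation.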
