ResearchTrend.AI
Learning Interpretable Logic Rules from Deep Vision Models

13 March 2025
Chuqin Geng
Yuhe Jiang
Ziyu Zhao
Haolin Ye
Zhaoyue Wang
Xujie Si
Abstract

We propose a general framework called VisionLogic to extract interpretable logic rules from deep vision models, with a focus on image classification tasks. Given any deep vision model that uses a fully connected layer as the output head, VisionLogic transforms neurons in the last layer into predicates and grounds them into vision concepts using causal validation. In this way, VisionLogic can provide local explanations for single images and global explanations for specific classes in the form of logic rules. Compared to existing interpretable visualization tools such as saliency maps, VisionLogic addresses several key challenges, including the lack of causal explanations, overconfidence in visualizations, and ambiguity in interpretation. VisionLogic also facilitates the study of visual concepts encoded by predicates, particularly how they behave under perturbation -- an area that remains underexplored in the field of hidden semantics. Apart from providing better visual explanations and insights into the visual concepts learned by the model, we show that VisionLogic retains most of the neural network's discriminative power in an interpretable and transparent manner. We envision it as a bridge between complex model behavior and human-understandable explanations, providing trustworthy and actionable insights for real-world applications.
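The abstract's core idea, turning last-layer neurons into boolean predicates and conjoining the most relevant ones into a per-class logic rule, can be sketched roughly as follows. This is an illustrative toy only: the function names (`predicates`, `local_rule`), the fixed activation threshold, and the top-k weight-times-activation heuristic are assumptions made here for clarity, and the sketch does not implement the paper's causal-validation step.

```python
import numpy as np

# Toy sketch (not the paper's algorithm): ground penultimate-layer
# activations into boolean predicates and read off a local rule.
rng = np.random.default_rng(0)

n_features, n_classes = 8, 3
W = rng.normal(size=(n_classes, n_features))  # fully connected output head
b = np.zeros(n_classes)

def predicates(h, threshold=0.5):
    """Ground each activation h_i into a predicate P_i(x) := (h_i > threshold).
    The fixed threshold is an assumption for illustration."""
    return h > threshold

def local_rule(h, class_idx, top_k=3, threshold=0.5):
    """Local explanation for one input: a conjunction of the top-k active
    predicates contributing most positively to the predicted class."""
    active = np.where(predicates(h, threshold))[0]
    contrib = W[class_idx, active] * h[active]      # per-predicate contribution
    top = active[np.argsort(contrib)[::-1][:top_k]]
    return sorted(top.tolist())

h = np.abs(rng.normal(size=n_features))  # stand-in penultimate activations
logits = W @ h + b
pred = int(np.argmax(logits))
rule = local_rule(h, pred)
print(f"class {pred} <= " + " AND ".join(f"P{i}" for i in rule))
```

A global explanation for a class would aggregate such rules over many images of that class; here everything is random data, so the printed rule only demonstrates the predicate-and-conjunction shape of the output.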

@article{geng2025_2503.10547,
  title={Learning Interpretable Logic Rules from Deep Vision Models},
  author={Chuqin Geng and Yuhe Jiang and Ziyu Zhao and Haolin Ye and Zhaoyue Wang and Xujie Si},
  journal={arXiv preprint arXiv:2503.10547},
  year={2025}
}