Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning

14 November 2016
Quanshi Zhang
Ruiming Cao
Ying Nian Wu
Song-Chun Zhu

Papers citing "Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning"

36 papers listed

Enhancing Pre-trained Representation Classifiability can Boost its Interpretability
International Conference on Learning Representations (ICLR), 2025
Shufan Shen, Zhaobo Qi, Junshu Sun, Qingming Huang, Qi Tian, Shuhui Wang
FAtt
28 Oct 2025
NeurFlow: Interpreting Neural Networks through Neuron Groups and Functional Interactions
International Conference on Learning Representations (ICLR), 2025
Tue Cao, Nhat X. Hoang, Hieu H. Pham, P. Nguyen, My T. Thai
22 Feb 2025

A Neurosymbolic Framework for Bias Correction in CNNs
Parth Padalkar, Natalia Slusarz, Ekaterina Komendantskaya, Gopal Gupta
24 May 2024

Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks
Parth Padalkar, Gopal Gupta
19 Oct 2023

NeSyFOLD: Neurosymbolic Framework for Interpretable Image Classification
Parth Padalkar, Huaduo Wang, Gopal Gupta
30 Jan 2023

AutoProtoNet: Interpretability for Prototypical Networks
Pedro Sandoval Segura, W. Lawson
02 Apr 2022

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
ACM Computing Surveys (ACM CSUR), 2022
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert
ELM, XAI
20 Jan 2022
Finding Representative Interpretations on Convolutional Neural Networks
IEEE International Conference on Computer Vision (ICCV), 2021
P. C. Lam, Lingyang Chu, Maxim Torgonskiy, Jian Pei, Yong Zhang, Lanjun Wang
FAtt, SSL, HAI
13 Aug 2021

Explainability-aided Domain Generalization for Image Classification
Robin M. Schmidt
FAtt, OOD
05 Apr 2021

Demystifying Deep Neural Networks Through Interpretation: A Survey
Giang Dao, Minwoo Lee
FaML, FAtt
13 Dec 2020

ERIC: Extracting Relations Inferred from Convolutions
Asian Conference on Computer Vision (ACCV), 2020
Joe Townsend, Theodoros Kasioumis, Hiroya Inakoshi
NAI, FAtt
19 Oct 2020

What do CNN neurons learn: Visualization & Clustering
Haoyue Dai
SSL
18 Oct 2020

Explainability in Deep Reinforcement Learning
Alexandre Heuillet, Fabien Couthouis, Natalia Díaz Rodríguez
XAI
15 Aug 2020

Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters
European Conference on Computer Vision (ECCV), 2020
Haoyun Liang, Zhihao Ouyang, Yuyuan Zeng, Hang Su, Zihao He, Shutao Xia, Jun Zhu, Bo Zhang
16 Jul 2020
Learning a functional control for high-frequency finance
Laura Leal, Mathieu Laurière, Charles-Albert Lehalle
AIFin
17 Jun 2020

Explainable Artificial Intelligence: a Systematic Review
Giulia Vilone, Luca Longo
XAI
29 May 2020

Interpretable and Accurate Fine-grained Recognition via Region Grouping
Zixuan Huang, Yin Li
21 May 2020

Explainable Deep Learning: A Field Guide for the Uninitiated
Journal of Artificial Intelligence Research (JAIR), 2020
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
AAML, XAI
30 Apr 2020

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Information Fusion (Inf. Fusion), 2019
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
XAI
22 Oct 2019

A game method for improving the interpretability of convolution neural network
Jinwei Zhao, Qizhou Wang, Fuqiang Zhang, Wanli Qiu, Yufei Wang, Yu Liu, Guo Xie, Weigang Ma, Bin Wang, Xinhong Hei
AI4CE
21 Oct 2019

Saliency Tubes: Visual Explanations for Spatio-Temporal Convolutions
Alexandros Stergiou, G. Kapidis, Grigorios Kalliatakis, C. Chrysoulas, R. Veltkamp, R. Poppe
FAtt
04 Feb 2019
Interpretable CNNs for Object Classification
Quanshi Zhang, Xin Eric Wang, Ying Nian Wu, Huilin Zhou, Song-Chun Zhu
08 Jan 2019

Explanatory Graphs for CNNs
Quanshi Zhang, Xin Eric Wang, Ruiming Cao, Ying Nian Wu, Feng Shi, Song-Chun Zhu
FAtt, GNN
18 Dec 2018

Mining Interpretable AOG Representations from Convolutional Networks via Active Question Answering
Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu
18 Dec 2018

Variational Saccading: Efficient Inference for Large Resolution Images
Jason Ramapuram, M. Diephuis, Frantzeska Lavda, Russ Webb, Alexandros Kalousis
08 Dec 2018

Counterfactuals uncover the modular structure of deep generative models
M. Besserve, Arash Mehrjou, Rémy Sun, Bernhard Schölkopf
DRL, BDL, DiffM
08 Dec 2018
Explaining Explanations: An Overview of Interpretability of Machine Learning
Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal
XAI
31 May 2018

Unsupervised Learning of Neural Networks to Explain Neural Networks
Quanshi Zhang, Yu Yang, Yuchen Liu, Ying Nian Wu, Song-Chun Zhu
FAtt, SSL
18 May 2018

Visual Interpretability for Deep Learning: a Survey
Quanshi Zhang, Song-Chun Zhu
FaML, HAI
02 Feb 2018

Interpreting CNNs via Decision Trees
Quanshi Zhang, Yu Yang, Ying Nian Wu, Song-Chun Zhu
FAtt
01 Feb 2018

Controllable Top-down Feature Transformer
Zhiwei Jia, Haoshen Hong, Siyang Wang, Kwonjoon Lee, Zhuowen Tu
ViT
06 Dec 2017

Examining CNN Representations with respect to Dataset Bias
AAAI Conference on Artificial Intelligence (AAAI), 2017
Quanshi Zhang, Wenguan Wang, Song-Chun Zhu
SSL, FAtt
29 Oct 2017

Interpretable Convolutional Neural Networks
Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu
FAtt
02 Oct 2017

Interpreting CNN Knowledge via an Explanatory Graph
Quanshi Zhang, Ruiming Cao, Feng Shi, Ying Nian Wu, Song-Chun Zhu
FAtt, GNN, SSL
05 Aug 2017

Interactively Transferring CNN Patterns for Part Localization
Quanshi Zhang, Ruiming Cao, Shengming Zhang, Mark Edmonds, Ying Nian Wu, Song-Chun Zhu
05 Aug 2017

Mining Object Parts from CNNs via Active Question-Answering
Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu
11 Apr 2017