ResearchTrend.AI

Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning
arXiv:1611.04246 (v2, latest). 14 November 2016.
Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu

Papers citing "Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning"

36 papers shown:

  1. Enhancing Pre-trained Representation Classifiability can Boost its Interpretability. International Conference on Learning Representations (ICLR), 2025.
     Shufan Shen, Zhaobo Qi, Junshu Sun, Qingming Huang, Qi Tian, Shuhui Wang. Topics: FAtt. 28 Oct 2025.
  2. NeurFlow: Interpreting Neural Networks through Neuron Groups and Functional Interactions. International Conference on Learning Representations (ICLR), 2025.
     Tue Cao, Nhat X. Hoang, Hieu H. Pham, P. Nguyen, My T. Thai. 22 Feb 2025.
  3. A Neurosymbolic Framework for Bias Correction in CNNs.
     Parth Padalkar, Natalia Slusarz, Ekaterina Komendantskaya, Gopal Gupta. 24 May 2024.
  4. Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks.
     Parth Padalkar, Gopal Gupta. 19 Oct 2023.
  5. NeSyFOLD: Neurosymbolic Framework for Interpretable Image Classification.
     Parth Padalkar, Huaduo Wang, Gopal Gupta. 30 Jan 2023.
  6. AutoProtoNet: Interpretability for Prototypical Networks.
     Pedro Sandoval Segura, W. Lawson. 02 Apr 2022.
  7. From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI. ACM Computing Surveys (ACM CSUR), 2022.
     Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert. Topics: ELM, XAI. 20 Jan 2022.
  8. Finding Representative Interpretations on Convolutional Neural Networks. IEEE International Conference on Computer Vision (ICCV), 2021.
     P. C. Lam, Lingyang Chu, Maxim Torgonskiy, Jian Pei, Yong Zhang, Lanjun Wang. Topics: FAtt, SSL, HAI. 13 Aug 2021.
  9. Explainability-aided Domain Generalization for Image Classification.
     Robin M. Schmidt. Topics: FAtt, OOD. 05 Apr 2021.
  10. Demystifying Deep Neural Networks Through Interpretation: A Survey.
      Giang Dao, Minwoo Lee. Topics: FaML, FAtt. 13 Dec 2020.
  11. ERIC: Extracting Relations Inferred from Convolutions. Asian Conference on Computer Vision (ACCV), 2020.
      Joe Townsend, Theodoros Kasioumis, Hiroya Inakoshi. Topics: NAI, FAtt. 19 Oct 2020.
  12. What do CNN neurons learn: Visualization & Clustering.
      Haoyue Dai. Topics: SSL. 18 Oct 2020.
  13. Explainability in Deep Reinforcement Learning.
      Alexandre Heuillet, Fabien Couthouis, Natalia Díaz Rodríguez. Topics: XAI. 15 Aug 2020.
  14. Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters. European Conference on Computer Vision (ECCV), 2020.
      Haoyun Liang, Zhihao Ouyang, Yuyuan Zeng, Hang Su, Zihao He, Shutao Xia, Jun Zhu, Bo Zhang. 16 Jul 2020.
  15. Learning a functional control for high-frequency finance.
      Laura Leal, Mathieu Laurière, Charles-Albert Lehalle. Topics: AIFin. 17 Jun 2020.
  16. Explainable Artificial Intelligence: a Systematic Review.
      Giulia Vilone, Luca Longo. Topics: XAI. 29 May 2020.
  17. Interpretable and Accurate Fine-grained Recognition via Region Grouping.
      Zixuan Huang, Yin Li. 21 May 2020.
  18. Explainable Deep Learning: A Field Guide for the Uninitiated. Journal of Artificial Intelligence Research (JAIR), 2020.
      Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran. Topics: AAML, XAI. 30 Apr 2020.
  19. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Information Fusion (Inf. Fusion), 2019.
      Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera. Topics: XAI. 22 Oct 2019.
  20. A game method for improving the interpretability of convolution neural network.
      Jinwei Zhao, Qizhou Wang, Fuqiang Zhang, Wanli Qiu, Yufei Wang, Yu Liu, Guo Xie, Weigang Ma, Bin Wang, Xinhong Hei. Topics: AI4CE. 21 Oct 2019.
  21. Saliency Tubes: Visual Explanations for Spatio-Temporal Convolutions.
      Alexandros Stergiou, G. Kapidis, Grigorios Kalliatakis, C. Chrysoulas, R. Veltkamp, R. Poppe. Topics: FAtt. 04 Feb 2019.
  22. Interpretable CNNs for Object Classification.
      Quanshi Zhang, Xin Eric Wang, Ying Nian Wu, Huilin Zhou, Song-Chun Zhu. 08 Jan 2019.
  23. Explanatory Graphs for CNNs.
      Quanshi Zhang, Xin Eric Wang, Ruiming Cao, Ying Nian Wu, Feng Shi, Song-Chun Zhu. Topics: FAtt, GNN. 18 Dec 2018.
  24. Mining Interpretable AOG Representations from Convolutional Networks via Active Question Answering.
      Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu. 18 Dec 2018.
  25. Variational Saccading: Efficient Inference for Large Resolution Images.
      Jason Ramapuram, M. Diephuis, Frantzeska Lavda, Russ Webb, Alexandros Kalousis. 08 Dec 2018.
  26. Counterfactuals uncover the modular structure of deep generative models.
      M. Besserve, Arash Mehrjou, Rémy Sun, Bernhard Schölkopf. Topics: DRL, BDL, DiffM. 08 Dec 2018.
  27. Explaining Explanations: An Overview of Interpretability of Machine Learning.
      Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal. Topics: XAI. 31 May 2018.
  28. Unsupervised Learning of Neural Networks to Explain Neural Networks.
      Quanshi Zhang, Yu Yang, Yuchen Liu, Ying Nian Wu, Song-Chun Zhu. Topics: FAtt, SSL. 18 May 2018.
  29. Visual Interpretability for Deep Learning: a Survey.
      Quanshi Zhang, Song-Chun Zhu. Topics: FaML, HAI. 02 Feb 2018.
  30. Interpreting CNNs via Decision Trees.
      Quanshi Zhang, Yu Yang, Ying Nian Wu, Song-Chun Zhu. Topics: FAtt. 01 Feb 2018.
  31. Controllable Top-down Feature Transformer.
      Zhiwei Jia, Haoshen Hong, Siyang Wang, Kwonjoon Lee, Zhuowen Tu. Topics: ViT. 06 Dec 2017.
  32. Examining CNN Representations with respect to Dataset Bias. AAAI Conference on Artificial Intelligence (AAAI), 2017.
      Quanshi Zhang, Wenguan Wang, Song-Chun Zhu. Topics: SSL, FAtt. 29 Oct 2017.
  33. Interpretable Convolutional Neural Networks.
      Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu. Topics: FAtt. 02 Oct 2017.
  34. Interpreting CNN Knowledge via an Explanatory Graph.
      Quanshi Zhang, Ruiming Cao, Feng Shi, Ying Nian Wu, Song-Chun Zhu. Topics: FAtt, GNN, SSL. 05 Aug 2017.
  35. Interactively Transferring CNN Patterns for Part Localization.
      Quanshi Zhang, Ruiming Cao, Shengming Zhang, Mark Edmonds, Ying Nian Wu, Song-Chun Zhu. 05 Aug 2017.
  36. Mining Object Parts from CNNs via Active Question-Answering.
      Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu. 11 Apr 2017.