Deeply Explain CNN via Hierarchical Decomposition

International Journal of Computer Vision (IJCV), 2022
23 January 2022
Ming-Ming Cheng, Peng-Tao Jiang, Linghao Han, Liang Wang, Juil Sock
Topic: FAtt
Links: arXiv (abs) · PDF · HTML

Papers citing "Deeply Explain CNN via Hierarchical Decomposition"

7 of 7 papers shown
CoE: Chain-of-Explanation via Automatic Visual Concept Circuit Description and Polysemanticity Quantification
Computer Vision and Pattern Recognition (CVPR), 2025
Wenlong Yu, Qilong Wang, Chuang Liu, Dong Li, Q. Hu
Topic: LRM
19 Mar 2025
SHA-CNN: Scalable Hierarchical Aware Convolutional Neural Network for Edge AI
N. Dhakad, Yuvnish Malhotra, Santosh Kumar Vishvakarma, Kaushik Roy
31 Jul 2024
DecomCAM: Advancing Beyond Saliency Maps through Decomposition and Integration
Yuguang Yang, Runtang Guo, Shen-Te Wu, Yimi Wang, Linlin Yang, Bo Fan, Jilong Zhong, Juan Zhang, Baochang Zhang
Topic: VLM
29 May 2024
Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models
Computer Vision and Pattern Recognition (CVPR), 2024
M. Kowal, Richard P. Wildes, Konstantinos G. Derpanis
Topic: GNN
02 Apr 2024
Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022
Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon
Topic: FAtt
30 Dec 2022
Quantifying and Learning Static vs. Dynamic Information in Deep Spatiotemporal Networks
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022
M. Kowal, Mennatullah Siam, Md. Amirul Islam, Neil D. B. Bruce, Richard P. Wildes, Konstantinos G. Derpanis
Topic: FAtt
03 Nov 2022
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Topics: FAtt, FaML
16 Feb 2016