Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models

2 April 2024
M. Kowal, Richard P. Wildes, Konstantinos G. Derpanis
GNN

Papers citing "Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models"

13 / 13 papers shown

NeurFlow: Interpreting Neural Networks through Neuron Groups and Functional Interactions
Tue Cao, Nhat X. Hoang, Hieu H. Pham, P. Nguyen, My T. Thai
67 · 0 · 0 · 22 Feb 2025

Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment
Harrish Thasarathan, Julian Forsyth, Thomas Fel, M. Kowal, Konstantinos G. Derpanis
91 · 7 · 0 · 06 Feb 2025

Visual Large Language Models for Generalized and Specialized Applications
Yifan Li, Zhixin Lai, Wentao Bao, Zhen Tan, Anh Dao, Kewei Sui, Jiayi Shen, Dong Liu, Huan Liu, Yu Kong
VLM
83 · 10 · 0 · 06 Jan 2025

OMENN: One Matrix to Explain Neural Networks
Adam Wróbel, Mikołaj Janusz, Bartosz Zieliński, Dawid Rymarczyk
FAtt, AAML
70 · 0 · 0 · 03 Dec 2024

Explainable and Interpretable Multimodal Large Language Models: A Comprehensive Survey
Yunkai Dang, Kaichen Huang, Jiahao Huo, Yibo Yan, S. Huang, ..., Kun Wang, Yong Liu, Jing Shao, Hui Xiong, Xuming Hu
LRM
96 · 14 · 0 · 03 Dec 2024

Decompose the model: Mechanistic interpretability in image models with Generalized Integrated Gradients (GIG)
Yearim Kim, Sangyu Han, Sangbum Han, Nojun Kwak
40 · 0 · 0 · 03 Sep 2024

Concept-skill Transferability-based Data Selection for Large Vision-Language Models
Jaewoo Lee, Boyang Li, Sung Ju Hwang
VLM
33 · 8 · 0 · 16 Jun 2024

Understanding Video Transformers via Universal Concept Discovery
M. Kowal, Achal Dave, Rares Ambrus, Adrien Gaidon, Konstantinos G. Derpanis, P. Tokmakov
ViT
27 · 2 · 0 · 19 Jan 2024

Toy Models of Superposition
Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, T. Henighan, ..., Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, C. Olah
AAML, MILM
117 · 314 · 0 · 21 Sep 2022

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju
157 · 181 · 0 · 03 Feb 2022

Deeply Explain CNN via Hierarchical Decomposition
Ming-Ming Cheng, Peng-Tao Jiang, Linghao Han, Liang Wang, Philip H. S. Torr
FAtt
41 · 15 · 0 · 23 Jan 2022

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
FAtt
120 · 293 · 0 · 17 Oct 2019

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
SyDa, FaML
286 · 4,143 · 0 · 23 Aug 2019