Transformer Interpretability Beyond Attention Visualization

17 December 2020
Hila Chefer
Shir Gur
Lior Wolf

Papers citing "Transformer Interpretability Beyond Attention Visualization"

48 / 98 papers shown
VISION DIFFMASK: Faithful Interpretation of Vision Transformers with Differentiable Patch Masking
A. Nalmpantis
Apostolos Panagiotopoulos
John Gkountouras
Konstantinos Papakostas
Wilker Aziz
10
4
0
13 Apr 2023
Computational modeling of semantic change
Nina Tahmasebi
Haim Dubossarsky
26
6
0
13 Apr 2023
Towards Evaluating Explanations of Vision Transformers for Medical Imaging
Piotr Komorowski
Hubert Baniecki
P. Biecek
MedIm
31
27
0
12 Apr 2023
Sector Patch Embedding: An Embedding Module Conforming to The Distortion Pattern of Fisheye Image
Dian Yang
Jiadong Tang
Yu Gao
Yi Yang
M. Fu
18
1
0
26 Mar 2023
PAMI: partition input and aggregate outputs for model interpretation
Wei Shi
Wentao Zhang
Weishi Zheng
Ruixuan Wang
FAtt
22
3
0
07 Feb 2023
X-ReID: Cross-Instance Transformer for Identity-Level Person Re-Identification
Leqi Shen
Tao He
Yuchen Guo
Guiguang Ding
39
5
0
04 Feb 2023
Holistically Explainable Vision Transformers
Moritz D Boehle
Mario Fritz
Bernt Schiele
ViT
31
9
0
20 Jan 2023
Opti-CAM: Optimizing saliency maps for interpretability
Hanwei Zhang
Felipe Torres
R. Sicre
Yannis Avrithis
Stéphane Ayache
28
22
0
17 Jan 2023
Dynamic Grained Encoder for Vision Transformers
Lin Song
Songyang Zhang
Songtao Liu
Zeming Li
Xuming He
Hongbin Sun
Jian-jun Sun
Nanning Zheng
ViT
21
34
0
10 Jan 2023
CLIP-ReID: Exploiting Vision-Language Model for Image Re-Identification without Concrete Text Labels
Siyuan Li
Li Sun
Qingli Li
VLM
28
148
0
25 Nov 2022
ViT-CX: Causal Explanation of Vision Transformers
Weiyan Xie
Xiao-hui Li
Caleb Chen Cao
Nevin L. Zhang
ViT
21
17
0
06 Nov 2022
Exploring Self-Attention for Crop-type Classification Explainability
Ivica Obadic
R. Roscher
Dario Augusto Borges Oliveira
Xiao Xiang Zhu
22
7
0
24 Oct 2022
Delving into Masked Autoencoders for Multi-Label Thorax Disease Classification
Junfei Xiao
Yutong Bai
Alan Yuille
Zongwei Zhou
MedIm
ViT
30
58
0
23 Oct 2022
Multi-Scale Wavelet Transformer for Face Forgery Detection
Jie Liu
Jingjing Wang
Peng Zhang
Chunmao Wang
Di Xie
Shiliang Pu
ViT
CVBM
28
8
0
08 Oct 2022
Systematic Generalization and Emergent Structures in Transformers Trained on Structured Tasks
Yuxuan Li
James L. McClelland
29
17
0
02 Oct 2022
Object-ABN: Learning to Generate Sharp Attention Maps for Action Recognition
Tomoya Nitta
Tsubasa Hirakawa
H. Fujiyoshi
Toru Tamaki
50
0
0
27 Jul 2022
Semantic Abstraction: Open-World 3D Scene Understanding from 2D Vision-Language Models
Huy Ha
Shuran Song
LM&Ro
VLM
28
101
0
23 Jul 2022
TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers
Jihao Liu
B. Liu
Hang Zhou
Hongsheng Li
Yu Liu
ViT
10
66
0
18 Jul 2022
Explanation-based Counterfactual Retraining (XCR): A Calibration Method for Black-box Models
Liu Zhendong
Wenyu Jiang
Yan Zhang
Chongjun Wang
CML
6
0
0
22 Jun 2022
SP-ViT: Learning 2D Spatial Priors for Vision Transformers
Yuxuan Zhou
Wangmeng Xiang
C. Li
Biao Wang
Xihan Wei
Lei Zhang
M. Keuper
Xia Hua
ViT
29
15
0
15 Jun 2022
Dual Decomposition of Convex Optimization Layers for Consistent Attention in Medical Images
Tom Ron
M. Weiler-Sagie
Tamir Hazan
FAtt
MedIm
19
6
0
06 Jun 2022
Optimizing Relevance Maps of Vision Transformers Improves Robustness
Hila Chefer
Idan Schwartz
Lior Wolf
ViT
27
37
0
02 Jun 2022
Attention Flows for General Transformers
Niklas Metzger
Christopher Hahn
Julian Siber
Frederik Schmitt
Bernd Finkbeiner
34
0
0
30 May 2022
Towards Opening the Black Box of Neural Machine Translation: Source and Target Interpretations of the Transformer
Javier Ferrando
Gerard I. Gállego
Belen Alastruey
Carlos Escolano
Marta R. Costa-jussà
22
44
0
23 May 2022
A graph-transformer for whole slide image classification
Yi Zheng
R. Gindra
Emily J. Green
E. Burks
Margrit Betke
J. Beane
V. Kolachalama
MedIm
34
120
0
19 May 2022
No Token Left Behind: Explainability-Aided Image Classification and Generation
Roni Paiss
Hila Chefer
Lior Wolf
VLM
26
29
0
11 Apr 2022
POSTER: A Pyramid Cross-Fusion Transformer Network for Facial Expression Recognition
Ce Zheng
Matías Mendieta
C. L. P. Chen
ViT
16
54
0
08 Apr 2022
VL-InterpreT: An Interactive Visualization Tool for Interpreting Vision-Language Transformers
Estelle Aflalo
Meng Du
Shao-Yen Tseng
Yongfei Liu
Chenfei Wu
Nan Duan
Vasudev Lal
23
45
0
30 Mar 2022
CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation
S. Gadre
Mitchell Wortsman
Gabriel Ilharco
Ludwig Schmidt
Shuran Song
CLIP
LM&Ro
25
142
0
20 Mar 2022
WegFormer: Transformers for Weakly Supervised Semantic Segmentation
Chunmeng Liu
Enze Xie
Wenjia Wang
Wenhai Wang
Guangya Li
Ping Luo
ViT
22
6
0
16 Mar 2022
The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy
Tianlong Chen
Zhenyu (Allen) Zhang
Yu Cheng
Ahmed Hassan Awadallah
Zhangyang Wang
ViT
27
37
0
12 Mar 2022
Measuring the Mixing of Contextual Information in the Transformer
Javier Ferrando
Gerard I. Gállego
Marta R. Costa-jussà
21
48
0
08 Mar 2022
Understanding microbiome dynamics via interpretable graph representation learning
K. Melnyk
Kuba Weimann
Tim Conrad
19
5
0
02 Mar 2022
XAI for Transformers: Better Explanations through Conservative Propagation
Ameen Ali
Thomas Schnake
Oliver Eberle
G. Montavon
Klaus-Robert Müller
Lior Wolf
FAtt
15
86
0
15 Feb 2022
Video Transformers: A Survey
Javier Selva
A. S. Johansen
Sergio Escalera
Kamal Nasrollahi
T. Moeslund
Albert Clapés
ViT
20
102
0
16 Jan 2022
An empirical user-study of text-based nonverbal annotation systems for human-human conversations
Joshua Y. Kim
K. Yacef
14
1
0
30 Dec 2021
Semantic Segmentation In-the-Wild Without Seeing Any Segmentation Examples
Nir Zabari
Yedid Hoshen
VLM
20
26
0
06 Dec 2021
TransFGU: A Top-down Approach to Fine-Grained Unsupervised Semantic Segmentation
Zhao-Heng Yin
Pichao Wang
Fan Wang
Xianzhe Xu
Hanling Zhang
Hao Li
Rong Jin
27
39
0
02 Dec 2021
Understanding How Encoder-Decoder Architectures Attend
Kyle Aitken
V. Ramasesh
Yuan Cao
Niru Maheswaranathan
18
17
0
28 Oct 2021
TransFER: Learning Relation-aware Facial Expression Representations with Transformers
Fanglei Xue
Qiangchang Wang
G. Guo
ViT
31
183
0
25 Aug 2021
Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer
Yifan Xu
Zhijie Zhang
Mengdan Zhang
Kekai Sheng
Ke Li
Weiming Dong
Liqing Zhang
Changsheng Xu
Xing Sun
ViT
18
201
0
03 Aug 2021
Representation learning for neural population activity with Neural Data Transformers
Joel Ye
C. Pandarinath
AI4TS
AI4CE
11
51
0
02 Aug 2021
IA-RED$^2$: Interpretability-Aware Redundancy Reduction for Vision Transformers
Bowen Pan
Rameswar Panda
Yifan Jiang
Zhangyang Wang
Rogerio Feris
A. Oliva
VLM
ViT
39
153
0
23 Jun 2021
KVT: k-NN Attention for Boosting Vision Transformers
Pichao Wang
Xue Wang
F. Wang
Ming Lin
Shuning Chang
Hao Li
R. L. Jin
ViT
32
105
0
28 May 2021
All Tokens Matter: Token Labeling for Training Better Vision Transformers
Zihang Jiang
Qibin Hou
Li-xin Yuan
Daquan Zhou
Yujun Shi
Xiaojie Jin
Anran Wang
Jiashi Feng
ViT
12
203
0
22 Apr 2021
Vision Transformer using Low-level Chest X-ray Feature Corpus for COVID-19 Diagnosis and Severity Quantification
Sangjoon Park
Gwanghyun Kim
Y. Oh
J. Seo
Sang Min Lee
Jin Hwan Kim
Sungjun Moon
Jae-Kwang Lim
Jong Chul Ye
ViT
MedIm
40
96
0
15 Apr 2021
Vision Transformer for COVID-19 CXR Diagnosis using Chest X-ray Feature Corpus
Sangjoon Park
Gwanghyun Kim
Y. Oh
J. Seo
Sang Min Lee
Jin Hwan Kim
Sungjun Moon
Jae-Kwang Lim
J. C. Ye
ViT
MedIm
28
33
0
12 Mar 2021
A Decomposable Attention Model for Natural Language Inference
Ankur P. Parikh
Oscar Täckström
Dipanjan Das
Jakob Uszkoreit
196
1,367
0
06 Jun 2016