Quantifying Attention Flow in Transformers
Samira Abnar, Willem H. Zuidema
arXiv:2005.00928 · 2 May 2020

Papers citing "Quantifying Attention Flow in Transformers" (50 of 403 shown)
Being Right for Whose Right Reasons?
Terne Sasha Thorn Jakobsen, Laura Cabello, Anders Søgaard · 01 Jun 2023

Solar Irradiance Anticipative Transformer
T. Mercier, Tasmiat Rahman, Amin Sabet · ViT · 29 May 2023

Analysis over vision-based models for pedestrian action anticipation
Lina Achaji, Julien Moreau, François Aioun, François Charpillet · ViT · 27 May 2023

Explainability Techniques for Chemical Language Models
Stefan Hödl, William Robinson, Yoram Bachrach, Wilhelm Huck, Tal Kachman · 25 May 2023

VISIT: Visualizing and Interpreting the Semantic Information Flow of Transformers
Shahar Katz, Yonatan Belinkov · 22 May 2023

Explaining How Transformers Use Context to Build Predictions
Javier Ferrando, Gerard I. Gállego, Ioannis Tsiamas, Marta R. Costa-jussà · 21 May 2023

Interpretability at Scale: Identifying Causal Mechanisms in Alpaca
Zhengxuan Wu, Atticus Geiger, Thomas Icard, Christopher Potts, Noah D. Goodman · MILM · 15 May 2023

Semantic Composition in Visually Grounded Language Models
Rohan Pandey · CoGe · 15 May 2023

Cascaded Cross-Attention Networks for Data-Efficient Whole-Slide Image Classification Using Transformers
Firas Khader, Jakob Nikolas Kather, T. Han, S. Nebelung, Christiane Kuhl, Johannes Stegmaier, Daniel Truhn · MedIm, ViT · 11 May 2023

Transformer-Based Model for Monocular Visual Odometry: A Video Understanding Approach
André O. Françani, Marcos R. O. A. Máximo · 10 May 2023

Preserving Locality in Vision Transformers for Class Incremental Learning
Bowen Zheng, Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan · CLL · 14 Apr 2023

VISION DIFFMASK: Faithful Interpretation of Vision Transformers with Differentiable Patch Masking
A. Nalmpantis, Apostolos Panagiotopoulos, John Gkountouras, Konstantinos Papakostas, Wilker Aziz · 13 Apr 2023

Computational modeling of semantic change
Nina Tahmasebi, Haim Dubossarsky · 13 Apr 2023

Towards Evaluating Explanations of Vision Transformers for Medical Imaging
Piotr Komorowski, Hubert Baniecki, P. Biecek · MedIm · 12 Apr 2023

ViT-Calibrator: Decision Stream Calibration for Vision Transformer
Lin Chen, Zhijie Jia, Tian Qiu, Lechao Cheng, Jie Lei, Zunlei Feng, Min-Gyoo Song · 10 Apr 2023

Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting
Syed Talal Wasim, Muzammal Naseer, Salman Khan, Fahad Shahbaz Khan, M. Shah · VLM, VPVLM · 06 Apr 2023

Long-Short Temporal Co-Teaching for Weakly Supervised Video Anomaly Detection
Shengyang Sun, Xiaojin Gong · 31 Mar 2023

What, when, and where? -- Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions
Brian Chen, Nina Shvetsova, Andrew Rouditchenko, D. Kondermann, Samuel Thomas, Shih-Fu Chang, Rogerio Feris, James R. Glass, Hilde Kuehne · 29 Mar 2023

Evaluating self-attention interpretability through human-grounded experimental protocol
Milan Bhan, Nina Achache, Victor Legrand, A. Blangero, Nicolas Chesneau · 27 Mar 2023

Coupling Artificial Neurons in BERT and Biological Neurons in the Human Brain
Xu Liu, Mengyue Zhou, Gaosheng Shi, Yu Du, Lin Zhao, Zihao Wu, David Liu, Tianming Liu, Xintao Hu · 27 Mar 2023

Improving Prediction Performance and Model Interpretability through Attention Mechanisms from Basic and Applied Research Perspectives
Shunsuke Kitada · FaML, HAI, AI4CE · 24 Mar 2023

How Does Attention Work in Vision Transformers? A Visual Analytics Attempt
Yiran Li, Junpeng Wang, Xin Dai, Liang Wang, Chin-Chia Michael Yeh, Yan Zheng, Wei Zhang, Kwan-Liu Ma · ViT · 24 Mar 2023

A New Benchmark: On the Utility of Synthetic Data with Blender for Bare Supervised Learning and Downstream Domain Adaptation
Hui Tang, Kui Jia · OOD · 16 Mar 2023

MRET: Multi-resolution Transformer for Video Quality Assessment
Junjie Ke, Tian Zhang, Yilin Wang, P. Milanfar, Feng Yang · ViT · 13 Mar 2023

Transformer-based World Models Are Happy With 100k Interactions
Jan Robine, Marc Höftmann, Tobias Uelwer, Stefan Harmeling · OffRL · 13 Mar 2023

X-Pruner: eXplainable Pruning for Vision Transformers
Lu Yu, Wei Xiang · ViT · 08 Mar 2023

Self-attention in Vision Transformers Performs Perceptual Grouping, Not Attention
Paria Mehrani, John K. Tsotsos · 02 Mar 2023

Multi-Layer Attention-Based Explainability via Transformers for Tabular Data
Andrea Trevino Gavito, Diego Klabjan, J. Utke · LMTD · 28 Feb 2023

Inseq: An Interpretability Toolkit for Sequence Generation Models
Gabriele Sarti, Nils Feldhus, Ludwig Sickert, Oskar van der Wal, Malvina Nissim, Arianna Bisazza · 27 Feb 2023

Boosting Adversarial Transferability using Dynamic Cues
Muzammal Naseer, Ahmad A Mahmood, Salman Khan, Fahad Shahbaz Khan · AAML · 23 Feb 2023

Scaling Vision Transformers to 22 Billion Parameters
Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, ..., Mario Lučić, Xiaohua Zhai, Daniel Keysers, Jeremiah Harmsen, N. Houlsby · MLLM · 10 Feb 2023

V1T: large-scale mouse V1 response prediction using a Vision Transformer
Bryan M. Li, I. M. Cornacchia, Nathalie L Rochefort, A. Onken · 06 Feb 2023

Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Maps
Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui · 01 Feb 2023

Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models
Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, Daniel Cohen-Or · DiffM · 31 Jan 2023

Fairness-aware Vision Transformer via Debiased Self-Attention
Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu · ViT · 31 Jan 2023

Quantifying Context Mixing in Transformers
Hosein Mohebbi, Willem H. Zuidema, Grzegorz Chrupała, A. Alishahi · 30 Jan 2023

Tagging before Alignment: Integrating Multi-Modal Tags for Video-Text Retrieval
Yizhen Chen, Jie Wang, Lijian Lin, Zhongang Qi, Jin Ma, Ying Shan · VLM · 30 Jan 2023

Fully transformer-based biomarker prediction from colorectal cancer histology: a large-scale multicentric study
S. J. Wagner, Daniel Reisenbüchler, N. West, J. Niehues, G. P. Veldhuizen, ..., Daniel Truhn, J. Schnabel, Melanie Boxberg, T. Peng, Jakob Nikolas Kather · 3DV, MedIm · 23 Jan 2023

Holistically Explainable Vision Transformers
Moritz D Boehle, Mario Fritz, Bernt Schiele · ViT · 20 Jan 2023

AtMan: Understanding Transformer Predictions Through Memory Efficient Attention Manipulation
Bjorn Deiseroth, Mayukh Deb, Samuel Weinbach, Manuel Brack, P. Schramowski, Kristian Kersting · 19 Jan 2023

Opti-CAM: Optimizing saliency maps for interpretability
Hanwei Zhang, Felipe Torres, R. Sicre, Yannis Avrithis, Stéphane Ayache · 17 Jan 2023

Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing
Shruthi Bannur, Stephanie L. Hyland, Qianchu Liu, Fernando Pérez-García, Maximilian Ilse, ..., Maria T. A. Wetscherek, M. Lungren, A. Nori, Javier Alvarez-Valle, Ozan Oktay · 11 Jan 2023

Universal Multimodal Representation for Language Understanding
Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Z. Li, Hai Zhao · SSL · 09 Jan 2023

Explainability and Robustness of Deep Visual Classification Models
Jindong Gu · AAML · 03 Jan 2023

Cross-modal Attention Congruence Regularization for Vision-Language Relation Alignment
Rohan Pandey, Rulin Shao, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency · 20 Dec 2022

Medical Diagnosis with Large Scale Multimodal Transformers: Leveraging Diverse Data for More Accurate Diagnosis
Firas Khader, Gustav Mueller-Franzes, Tian Wang, T. Han, Soroosh Tayebi Arasteh, ..., Keno Bressem, Christiane Kuhl, S. Nebelung, Jakob Nikolas Kather, Daniel Truhn · 18 Dec 2022

Rethinking Cooking State Recognition with Vision Transformers
A. Khan, Alif Ashrafee, Reeshoon Sayera, Shahriar Ivan, Sabbir Ahmed · ViT · 16 Dec 2022

Explainability of Text Processing and Retrieval Methods: A Critical Survey
Sourav Saha, Debapriyo Majumdar, Mandar Mitra · 14 Dec 2022

Task Bias in Vision-Language Models
Sachit Menon, I. Chandratreya, Carl Vondrick · VLM, SSL · 08 Dec 2022

On the Importance of Clinical Notes in Multi-modal Learning for EHR Data
Severin Husmann, Hugo Yèche, Gunnar Rätsch, Rita Kuznetsova · HAI · 06 Dec 2022