ResearchTrend.AI
GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers

6 May 2022
Ali Modarressi
Mohsen Fayyaz
Yadollah Yaghoobzadeh
Mohammad Taher Pilehvar

Papers citing "GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers"

27 / 27 papers shown
Are We Paying Attention to Her? Investigating Gender Disambiguation and Attention in Machine Translation
Chiara Manna
Afra Alishahi
Frédéric Blain
Eva Vanmassenhove
13 May 2025
From Text to Graph: Leveraging Graph Neural Networks for Enhanced Explainability in NLP
Fabio Yáñez-Romero
Andrés Montoyo
Armando Suárez
Yoan Gutiérrez
Ruslan Mitkov
02 Apr 2025
Collapse of Dense Retrievers: Short, Early, and Literal Biases Outranking Factual Evidence
Mohsen Fayyaz
Ali Modarressi
Hinrich Schuetze
Nanyun Peng
06 Mar 2025
A Close Look at Decomposition-based XAI-Methods for Transformer Language Models
L. Arras
Bruno Puri
Patrick Kahardipraja
Sebastian Lapuschkin
Wojciech Samek
21 Feb 2025
Attention Mechanisms Don't Learn Additive Models: Rethinking Feature Importance for Transformers
Tobias Leemann
Alina Fastowski
Felix Pfeiffer
Gjergji Kasneci
10 Jan 2025
How Language Models Prioritize Contextual Grammatical Cues?
Hamidreza Amirzadeh
A. Alishahi
Hosein Mohebbi
04 Oct 2024
Counterfactuals As a Means for Evaluating Faithfulness of Attribution Methods in Autoregressive Language Models
Sepehr Kamahi
Yadollah Yaghoobzadeh
21 Aug 2024
Exploring the Plausibility of Hate and Counter Speech Detectors with Explainable AI
Adrian Jaques Böck
D. Slijepcevic
Matthias Zeppelzauer
25 Jul 2024
Explanation Regularisation through the Lens of Attributions
Pedro Ferreira
Wilker Aziz
Ivan Titov
23 Jul 2024
Evaluating Human Alignment and Model Faithfulness of LLM Rationale
Mohsen Fayyaz
Fan Yin
Jiao Sun
Nanyun Peng
28 Jun 2024
InternalInspector $I^2$: Robust Confidence Estimation in LLMs through Internal States
Mohammad Beigi
Ying Shen
Runing Yang
Zihao Lin
Qifan Wang
Ankith Mohan
Jianfeng He
Ming Jin
Chang-Tien Lu
Lifu Huang
17 Jun 2024
An Unsupervised Approach to Achieve Supervised-Level Explainability in Healthcare Records
Joakim Edin
Maria Maistro
Lars Maaløe
Lasse Borgholt
Jakob Drachmann Havtorn
Tuukka Ruotsalo
13 Jun 2024
Unveiling and Manipulating Prompt Influence in Large Language Models
Zijian Feng
Hanzhang Zhou
Zixiao Zhu
Junlang Qian
Kezhi Mao
20 May 2024
Isotropy, Clusters, and Classifiers
Timothee Mickus
Stig-Arne Gronroos
Joseph Attieh
05 Feb 2024
From Understanding to Utilization: A Survey on Explainability for Large Language Models
Haoyan Luo
Lucia Specia
23 Jan 2024
Better Explain Transformers by Illuminating Important Information
Linxin Song
Yan Cui
Ao Luo
Freddy Lecue
Irene Z Li
18 Jan 2024
A Glitch in the Matrix? Locating and Detecting Language Model Grounding with Fakepedia
Giovanni Monea
Maxime Peyrard
Martin Josifoski
Vishrav Chaudhary
Jason Eisner
Emre Kiciman
Hamid Palangi
Barun Patra
Robert West
04 Dec 2023
Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers
Hosein Mohebbi
Grzegorz Chrupała
Willem H. Zuidema
A. Alishahi
15 Oct 2023
Why bother with geometry? On the relevance of linear decompositions of Transformer embeddings
Timothee Mickus
Raúl Vázquez
10 Oct 2023
DecompX: Explaining Transformers Decisions by Propagating Token Decomposition
Ali Modarressi
Mohsen Fayyaz
Ehsan Aghazadeh
Yadollah Yaghoobzadeh
Mohammad Taher Pilehvar
05 Jun 2023
Centering the Margins: Outlier-Based Identification of Harmed Populations in Toxicity Detection
Vyoma Raman
Eve Fleisig
Dan Klein
24 May 2023
Computational modeling of semantic change
Nina Tahmasebi
Haim Dubossarsky
13 Apr 2023
Inseq: An Interpretability Toolkit for Sequence Generation Models
Gabriele Sarti
Nils Feldhus
Ludwig Sickert
Oskar van der Wal
Malvina Nissim
Arianna Bisazza
27 Feb 2023
Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Maps
Goro Kobayashi
Tatsuki Kuribayashi
Sho Yokoi
Kentaro Inui
01 Feb 2023
Quantifying Context Mixing in Transformers
Hosein Mohebbi
Willem H. Zuidema
Grzegorz Chrupała
A. Alishahi
30 Jan 2023
Measuring the Mixing of Contextual Information in the Transformer
Javier Ferrando
Gerard I. Gállego
Marta R. Costa-jussá
08 Mar 2022
Incorporating Residual and Normalization Layers into Analysis of Masked Language Models
Goro Kobayashi
Tatsuki Kuribayashi
Sho Yokoi
Kentaro Inui
15 Sep 2021