Neuron-level Interpretation of Deep NLP Models: A Survey
arXiv: 2108.13138 · 30 August 2021
Hassan Sajjad, Nadir Durrani, Fahim Dalvi
Tags: MILM, AI4CE

Cited By
Papers citing "Neuron-level Interpretation of Deep NLP Models: A Survey" (15 of 65 shown)
Title | Authors | Tags | Likes | Citations | Comments | Date
----- | ------- | ---- | ----- | --------- | -------- | ----
Post-hoc analysis of Arabic transformer models | Ahmed Abdelali, Nadir Durrani, Fahim Dalvi, Hassan Sajjad | — | 8 | 1 | 0 | 18 Oct 2022
The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in Transformers | Zong-xiao Li, Chong You, Srinadh Bhojanapalli, Daliang Li, A. S. Rawat, ..., Kenneth Q Ye, Felix Chern, Felix X. Yu, Ruiqi Guo, Surinder Kumar | MoE | 25 | 87 | 0 | 12 Oct 2022
Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks | Tilman Raukur, A. Ho, Stephen Casper, Dylan Hadfield-Menell | AAML, AI4CE | 18 | 123 | 0 | 27 Jul 2022
IDANI: Inference-time Domain Adaptation via Neuron-level Interventions | Omer Antverg, Eyal Ben-David, Yonatan Belinkov | OOD, AI4CE | 11 | 5 | 0 | 01 Jun 2022
Discovering Latent Concepts Learned in BERT | Fahim Dalvi, A. Khan, Firoj Alam, Nadir Durrani, Jia Xu, Hassan Sajjad | SSL | 11 | 56 | 0 | 15 May 2022
Probing for Constituency Structure in Neural Language Models | David Arps, Younes Samih, Laura Kallmeyer, Hassan Sajjad | — | 16 | 12 | 0 | 13 Apr 2022
Sparse Interventions in Language Models with Differentiable Masking | Nicola De Cao, Leon Schmid, Dieuwke Hupkes, Ivan Titov | — | 17 | 26 | 0 | 13 Dec 2021
On Neurons Invariant to Sentence Structural Changes in Neural Machine Translation | Gal Patel, Leshem Choshen, Omri Abend | — | 20 | 2 | 0 | 06 Oct 2021
What do End-to-End Speech Models Learn about Speaker, Language and Channel Information? A Layer-wise and Neuron-level Analysis | Shammur A. Chowdhury, Nadir Durrani, Ahmed M. Ali | — | 25 | 12 | 0 | 01 Jul 2021
Effect of Post-processing on Contextualized Word Representations | Hassan Sajjad, Firoj Alam, Fahim Dalvi, Nadir Durrani | — | 6 | 9 | 0 | 15 Apr 2021
Similarity Analysis of Contextual Word Representation Models | John M. Wu, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, James R. Glass | — | 46 | 73 | 0 | 03 May 2020
On the Effect of Dropping Layers of Pre-trained Transformer Models | Hassan Sajjad, Fahim Dalvi, Nadir Durrani, Preslav Nakov | — | 15 | 130 | 0 | 08 Apr 2020
What you can cram into a single vector: Probing sentence embeddings for linguistic properties | Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni | — | 199 | 876 | 0 | 03 May 2018
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding | Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman | ELM | 294 | 6,927 | 0 | 20 Apr 2018
Efficient Estimation of Word Representations in Vector Space | Tomáš Mikolov, Kai Chen, G. Corrado, J. Dean | 3DV | 228 | 31,150 | 0 | 16 Jan 2013