arXiv: 2310.03686
DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers
Anna Langedijk, Hosein Mohebbi, Gabriele Sarti, Willem H. Zuidema, Jaap Jumelet
5 October 2023
Papers citing "DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers" (5 of 5 shown):
- Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models
  Asma Ghandeharioun, Avi Caciularu, Adam Pearce, Lucas Dixon, Mor Geva
  11 Jan 2024

- Quantifying Context Mixing in Transformers
  Hosein Mohebbi, Willem H. Zuidema, Grzegorz Chrupała, A. Alishahi
  30 Jan 2023

- Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small
  Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, Jacob Steinhardt
  01 Nov 2022

- Towards Faithful Model Explanation in NLP: A Survey
  Qing Lyu, Marianna Apidianaki, Chris Callison-Burch [XAI]
  22 Sep 2022

- Stanza: A Python Natural Language Processing Toolkit for Many Human Languages
  Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, Christopher D. Manning [AI4TS]
  16 Mar 2020