Finding Experts in Transformer Models
Xavier Suau, Luca Zappella, N. Apostoloff
arXiv:2005.07647 · 15 May 2020
Papers citing "Finding Experts in Transformer Models" (6 of 6 shown)
- Finding Skill Neurons in Pre-trained Transformer-based Language Models (14 Nov 2022). Xiaozhi Wang, Kaiyue Wen, Zhengyan Zhang, Lei Hou, Zhiyuan Liu, Juanzi Li. Tags: MILM, MoE.
- Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values (30 Jun 2022). Zijie J. Wang, Alex Kale, Harsha Nori, P. Stella, M. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, J. W. Vaughan, R. Caruana. Tags: KELM.
- MoEfication: Transformer Feed-forward Layers are Mixtures of Experts (05 Oct 2021). Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou. Tags: MoE.
- Neuron-level Interpretation of Deep NLP Models: A Survey (30 Aug 2021). Hassan Sajjad, Nadir Durrani, Fahim Dalvi. Tags: MILM, AI4CE.
- What you can cram into a single vector: Probing sentence embeddings for linguistic properties (03 May 2018). Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni.
- GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (20 Apr 2018). Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. Tags: ELM.