What do Neural Machine Translation Models Learn about Morphology?
arXiv:1704.03471 · 11 April 2017
Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, James R. Glass
Papers citing "What do Neural Machine Translation Models Learn about Morphology?" (50 of 242 shown)
- Intent Recognition in Conversational Recommender Systems · Sahar Moradizeyveh · 06 Dec 2022
- Localization vs. Semantics: Visual Representations in Unimodal and Multimodal Models · Zhuowan Li, Cihang Xie, Benjamin Van Durme, Alan L. Yuille · [VLM, SSL] · 01 Dec 2022
- Transferability Estimation Based On Principal Gradient Expectation · Huiyan Qi, Lechao Cheng, Jingjing Chen, Yue Yu, Xue Song, Zunlei Feng, Yueping Jiang · 29 Nov 2022
- ConceptX: A Framework for Latent Concept Analysis · Firoj Alam, Fahim Dalvi, Nadir Durrani, Hassan Sajjad, A. Khan, Jia Xu · 12 Nov 2022
- The Architectural Bottleneck Principle · Tiago Pimentel, Josef Valvoda, Niklas Stoehr, Ryan Cotterell · 11 Nov 2022
- Impact of Adversarial Training on Robustness and Generalizability of Language Models · Enes Altinisik, Hassan Sajjad, H. Sencar, Safa Messaoud, Sanjay Chawla · [AAML] · 10 Nov 2022
- BLOOM: A 176B-Parameter Open-Access Multilingual Language Model · BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, ..., Zhongli Xie, Zifan Ye, M. Bras, Younes Belkada, Thomas Wolf · [VLM] · 09 Nov 2022
- Do Charge Prediction Models Learn Legal Theory? · Zhenwei An, Quzhe Huang, Cong Jiang, Yansong Feng, Dongyan Zhao · [ELM, AILaw] · 31 Oct 2022
- The Better Your Syntax, the Better Your Semantics? Probing Pretrained Language Models for the English Comparative Correlative · Leonie Weissweiler, Valentin Hofmann, Abdullatif Köksal, Hinrich Schütze · 24 Oct 2022
- On the Transformation of Latent Space in Fine-Tuned NLP Models · Nadir Durrani, Hassan Sajjad, Fahim Dalvi, Firoj Alam · 23 Oct 2022
- Understanding Domain Learning in Language Models Through Subpopulation Analysis · Zheng Zhao, Yftah Ziser, Shay B. Cohen · 22 Oct 2022
- Probing with Noise: Unpicking the Warp and Weft of Embeddings · Filip Klubička, John D. Kelleher · 21 Oct 2022
- Post-hoc analysis of Arabic transformer models · Ahmed Abdelali, Nadir Durrani, Fahim Dalvi, Hassan Sajjad · 18 Oct 2022
- Measures of Information Reflect Memorization Patterns · Rachit Bansal, Danish Pruthi, Yonatan Belinkov · 17 Oct 2022
- Predicting Fine-Tuning Performance with Probing · Zining Zhu, Soroosh Shahtalebi, Frank Rudzicz · 13 Oct 2022
- Assessing Neural Referential Form Selectors on a Realistic Multilingual Dataset · Guanyi Chen, F. Same, Kees van Deemter · 10 Oct 2022
- Survey: Exploiting Data Redundancy for Optimization of Deep Learning · Jou-An Chen, Wei Niu, Bin Ren, Yanzhi Wang, Xipeng Shen · 29 Aug 2022
- Proton: Probing Schema Linking Information from Pre-trained Language Models for Text-to-SQL Parsing · Lihan Wang, Bowen Qin, Binyuan Hui, Bowen Li, Min Yang, Bailin Wang, Binhua Li, Fei Huang, Luo Si, Yongbin Li · 28 Jun 2022
- Analyzing Encoded Concepts in Transformer Language Models · Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Firoj Alam, A. Khan, Jia Xu · 27 Jun 2022
- Discovering Salient Neurons in Deep NLP Models · Nadir Durrani, Fahim Dalvi, Hassan Sajjad · [KELM, MILM] · 27 Jun 2022
- AST-Probe: Recovering abstract syntax trees from hidden representations of pre-trained language models · José Antonio Hernández López, M. Weyssow, Jesús Sánchez Cuadrado, H. Sahraoui · 23 Jun 2022
- BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning · Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan · 17 Jun 2022
- Order-sensitive Shapley Values for Evaluating Conceptual Soundness of NLP Models · Kaiji Lu, Anupam Datta · 01 Jun 2022
- Improving VAE-based Representation Learning · Mingtian Zhang, Tim Z. Xiao, Brooks Paige, David Barber · [SSL, DRL] · 28 May 2022
- DivEMT: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages · Gabriele Sarti, Arianna Bisazza, Ana Guerberof Arenas, Antonio Toral · 24 May 2022
- Discovering Latent Concepts Learned in BERT · Fahim Dalvi, A. Khan, Firoj Alam, Nadir Durrani, Jia Xu, Hassan Sajjad · [SSL] · 15 May 2022
- Implicit N-grams Induced by Recurrence · Xiaobing Sun, Wei Lu · 05 May 2022
- Systematicity, Compositionality and Transitivity of Deep NLP Models: a Metamorphic Testing Perspective · Edoardo Manino, Julia Rozanova, Danilo S. Carvalho, André Freitas, Lucas C. Cordeiro · 26 Apr 2022
- It Takes Two Flints to Make a Fire: Multitask Learning of Neural Relation and Explanation Classifiers · Zheng Tang, Mihai Surdeanu · 25 Apr 2022
- Probing Script Knowledge from Pre-Trained Models · Zijian Jin, Xingyu Zhang, Mo Yu, Lifu Huang · 16 Apr 2022
- Interpretation of Black Box NLP Models: A Survey · Shivani Choudhary, N. Chatterjee, S. K. Saha · [FAtt] · 31 Mar 2022
- Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution · Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang · [OODD] · 21 Feb 2022
- Evaluating the Construct Validity of Text Embeddings with Application to Survey Questions · Qixiang Fang, D. Nguyen, Daniel L. Oberski · 18 Feb 2022
- Locating and Editing Factual Associations in GPT · Kevin Meng, David Bau, A. Andonian, Yonatan Belinkov · [KELM] · 10 Feb 2022
- Table Pre-training: A Survey on Model Architectures, Pre-training Objectives, and Downstream Tasks · Haoyu Dong, Zhoujun Cheng, Xinyi He, Mengyuan Zhou, Anda Zhou, Fan Zhou, Ao Liu, Shi Han, Dongmei Zhang · [LMTD] · 24 Jan 2022
- Interpreting Arabic Transformer Models · Ahmed Abdelali, Nadir Durrani, Fahim Dalvi, Hassan Sajjad · 19 Jan 2022
- Representation Alignment in Neural Networks · Ehsan Imani, Wei Hu, Martha White · 15 Dec 2021
- Variation and generality in encoding of syntactic anomaly information in sentence embeddings · Qinxuan Wu, Allyson Ettinger · 12 Nov 2021
- Fast Model Editing at Scale · E. Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, Christopher D. Manning · [KELM] · 21 Oct 2021
- SlovakBERT: Slovak Masked Language Model · Matúš Pikuliak, Stefan Grivalsky, Martin Konopka, Miroslav Blšták, Martin Tamajka, Viktor Bachratý, Marián Simko, Pavol Balázik, Michal Trnka, Filip Uhlárik · 30 Sep 2021
- On the Prunability of Attention Heads in Multilingual BERT · Aakriti Budhraja, Madhura Pande, Pratyush Kumar, Mitesh M. Khapra · 26 Sep 2021
- Putting Words in BERT's Mouth: Navigating Contextualized Vector Spaces with Pseudowords · Taelin Karidi, Yichu Zhou, Nathan Schneider, Omri Abend, Vivek Srikumar · 23 Sep 2021
- Survey: Transformer based Video-Language Pre-training · Ludan Ruan, Qin Jin · [VLM, ViT] · 21 Sep 2021
- Distilling Linguistic Context for Language Model Compression · Geondo Park, Gyeongman Kim, Eunho Yang · 17 Sep 2021
- Learning Mathematical Properties of Integers · Maria Ryskina, Kevin Knight · 15 Sep 2021
- Examining Cross-lingual Contextual Embeddings with Orthogonal Structural Probes · Tomasz Limisiewicz, David Mareček · 10 Sep 2021
- How much pretraining data do language models need to learn syntax? · Laura Pérez-Mayos, Miguel Ballesteros, Leo Wanner · 07 Sep 2021
- How Suitable Are Subword Segmentation Strategies for Translating Non-Concatenative Morphology? · Chantal Amrhein, Rico Sennrich · 02 Sep 2021
- How Does Adversarial Fine-Tuning Benefit BERT? · J. Ebrahimi, Hao Yang, Wei Zhang · [AAML] · 31 Aug 2021
- Neuron-level Interpretation of Deep NLP Models: A Survey · Hassan Sajjad, Nadir Durrani, Fahim Dalvi · [MILM, AI4CE] · 30 Aug 2021