What do Neural Machine Translation Models Learn about Morphology?
arXiv:1704.03471, 11 April 2017
Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, James R. Glass

Papers citing "What do Neural Machine Translation Models Learn about Morphology?"

Showing 50 of 251 citing papers (page 3 of 6)

Putting Words in BERT's Mouth: Navigating Contextualized Vector Spaces with Pseudowords
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Taelin Karidi, Yichu Zhou, Nathan Schneider, Omri Abend, Vivek Srikumar
23 Sep 2021

Survey: Transformer based Video-Language Pre-training
Ludan Ruan, Qin Jin
21 Sep 2021

Distilling Linguistic Context for Language Model Compression
Geondo Park, Gyeongman Kim, Eunho Yang
17 Sep 2021

Learning Mathematical Properties of Integers
Maria Ryskina, Kevin Knight
15 Sep 2021

Examining Cross-lingual Contextual Embeddings with Orthogonal Structural Probes
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Tomasz Limisiewicz, David Mareček
10 Sep 2021

How much pretraining data do language models need to learn syntax?
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Laura Pérez-Mayos, Miguel Ballesteros, Leo Wanner
07 Sep 2021

How Suitable Are Subword Segmentation Strategies for Translating Non-Concatenative Morphology?
Chantal Amrhein, Rico Sennrich
02 Sep 2021

How Does Adversarial Fine-Tuning Benefit BERT?
J. Ebrahimi, Hao Yang, Wei Zhang
31 Aug 2021

Neuron-level Interpretation of Deep NLP Models: A Survey
Transactions of the Association for Computational Linguistics (TACL), 2021
Hassan Sajjad, Nadir Durrani, Fahim Dalvi
30 Aug 2021

What do pre-trained code models know about code?
International Conference on Automated Software Engineering (ASE), 2021
Anjan Karmakar, Romain Robbes
25 Aug 2021

What can Neural Referential Form Selectors Learn?
Guanyi Chen, F. Same, Kees van Deemter
15 Aug 2021

Grounding Representation Similarity with Statistical Testing
Frances Ding, Jean-Stanislas Denain, Jacob Steinhardt
03 Aug 2021

On the Difficulty of Translating Free-Order Case-Marking Languages
Arianna Bisazza, Ahmet Üstün, Stephan Sportel
13 Jul 2021

What do End-to-End Speech Models Learn about Speaker, Language and Channel Information? A Layer-wise and Neuron-level Analysis
Shammur A. Chowdhury, Nadir Durrani, Ahmed M. Ali
01 Jul 2021

Memorization and Generalization in Neural Code Intelligence Models
Md Rafiqul Islam Rabin, Aftab Hussain, Mohammad Amin Alipour, Vincent J. Hellendoorn
16 Jun 2021

How transfer learning impacts linguistic knowledge in deep NLP models?
Findings, 2021
Nadir Durrani, Hassan Sajjad, Fahim Dalvi
31 May 2021

Computational Morphology with Neural Network Approaches
Ling Liu
19 May 2021

Fine-grained Interpretation and Causation Analysis in Deep NLP Models
North American Chapter of the Association for Computational Linguistics (NAACL), 2021
Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani
17 May 2021

How Reliable are Model Diagnostics?
Findings, 2021
V. Aribandi, Yi Tay, Donald Metzler
12 May 2021

Morph Call: Probing Morphosyntactic Content of Multilingual Transformers
Vladislav Mikhailov, O. Serikov, Ekaterina Artemova
26 Apr 2021

A multilabel approach to morphosyntactic probing
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Naomi Tachikawa Shapiro, Amandalynne Paullada, Shane Steinert-Threlkeld
17 Apr 2021

Effect of Post-processing on Contextualized Word Representations
International Conference on Computational Linguistics (COLING), 2021
Hassan Sajjad, Firoj Alam, Fahim Dalvi, Nadir Durrani
15 Apr 2021

Factual Probing Is [MASK]: Learning vs. Learning to Recall
North American Chapter of the Association for Computational Linguistics (NAACL), 2021
Zexuan Zhong, Dan Friedman, Danqi Chen
12 Apr 2021

Probing BERT in Hyperbolic Spaces
International Conference on Learning Representations (ICLR), 2021
Boli Chen, Yao Fu, Guangwei Xu, Pengjun Xie, Chuanqi Tan, Mosha Chen, L. Jing
08 Apr 2021

Explaining a Neural Attention Model for Aspect-Based Sentiment Classification Using Diagnostic Classification
ACM Symposium on Applied Computing (SAC), 2021
Lisa Meijer, Flavius Frasincar, Maria Mihaela Truşcă
29 Mar 2021

Local Interpretations for Explainable Natural Language Processing: A Survey
ACM Computing Surveys (CSUR), 2021
Siwen Luo, Michal Guerquin, S. Han, Josiah Poon
20 Mar 2021

An empirical analysis of phrase-based and neural machine translation
Hamidreza Ghader
04 Mar 2021

Probing Classifiers: Promises, Shortcomings, and Advances
International Conference on Computational Logic (ICCL), 2021
Yonatan Belinkov
24 Feb 2021

Evaluating Contextualized Language Models for Hungarian
Judit Ács, Dániel Lévai, D. Nemeskey, András Kornai
22 Feb 2021

HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition
INFORMS Journal on Data Science, 2021
Avihay Chriqui, I. Yahav
03 Feb 2021

Measuring and Improving Consistency in Pretrained Language Models
Transactions of the Association for Computational Linguistics (TACL), 2021
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg
01 Feb 2021

On the Evolution of Syntactic Information Encoded by BERT's Contextualized Representations
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2021
Laura Pérez-Mayos, Roberto Carlini, Miguel Ballesteros, Leo Wanner
27 Jan 2021

Coloring the Black Box: What Synesthesia Tells Us about Character Embeddings
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2021
Katharina Kann, Mauro M. Monsalve-Mercado
26 Jan 2021

To Understand Representation of Layer-aware Sequence Encoders as Multi-order-graph
Sufeng Duan, Hai Zhao
16 Jan 2021

What all do audio transformer models hear? Probing Acoustic Representations for Language Delivery and its Structure
Jui Shah, Yaman Kumar Singla, Changyou Chen, R. Shah
02 Jan 2021

Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning
International Conference on Learning Representations (ICLR), 2020
Xuebo Liu, Longyue Wang, Yang Li, Liang Ding, Lidia S. Chao, Zhaopeng Tu
29 Dec 2020

Towards a Universal Continuous Knowledge Base
AI Open (AO), 2020
Gang Chen, Maosong Sun, Yang Liu
25 Dec 2020

diagNNose: A Library for Neural Activation Analysis
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2020
Jaap Jumelet
13 Nov 2020

CxGBERT: BERT meets Construction Grammar
Harish Tayyar Madabushi, Laurence Romain, Dagmar Divjak, P. Milin
09 Nov 2020

Understanding Pure Character-Based Neural Machine Translation: The Case of Translating Finnish into English
Gongbo Tang, Rico Sennrich, Joakim Nivre
06 Nov 2020

Understanding Pre-trained BERT for Aspect-based Sentiment Analysis
International Conference on Computational Linguistics (COLING), 2020
Hu Xu, Lei Shu, Philip S. Yu, Bing-Quan Liu
31 Oct 2020

Probing Task-Oriented Dialogue Representation from Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Chien-Sheng Wu, Caiming Xiong
26 Oct 2020

Deep Clustering of Text Representations for Supervision-free Probing of Syntax
AAAI Conference on Artificial Intelligence (AAAI), 2020
Vikram Gupta, Haoyue Shi, Kevin Gimpel, Mrinmaya Sachan
24 Oct 2020

Analyzing the Source and Target Contributions to Predictions in Neural Machine Translation
Elena Voita, Rico Sennrich, Ivan Titov
21 Oct 2020

Intrinsic Probing through Dimension Selection
Lucas Torroba Hennigen, Adina Williams, Robert Bamler
06 Oct 2020

Analyzing Individual Neurons in Pre-trained Language Models
Nadir Durrani, Hassan Sajjad, Fahim Dalvi, Yonatan Belinkov
06 Oct 2020

On the Sub-Layer Functionalities of Transformer Decoder
Findings, 2020
Yilin Yang, Longyue Wang, Shuming Shi, Prasad Tadepalli, Stefan Lee, Zhaopeng Tu
06 Oct 2020

On the Interplay Between Fine-tuning and Sentence-level Probing for Linguistic Knowledge in Pre-trained Transformers
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2020
Marius Mosbach, A. Khokhlova, Michael A. Hedderich, Dietrich Klakow
06 Oct 2020

Syntax Representation in Word Embeddings and Neural Networks -- A Survey
Conference on Theory and Practice of Information Technologies (TPIT), 2020
Tomasz Limisiewicz, David Mareček
02 Oct 2020

Examining the rhetorical capacities of neural language models
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2020
Zining Zhu, Chuer Pan, Mohamed Abdalla, Frank Rudzicz
01 Oct 2020