ResearchTrend.AI

On the Linguistic Representational Power of Neural Machine Translation Models
arXiv:1911.00317 · Cited By

1 November 2019
Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, James R. Glass
MILM

Papers citing "On the Linguistic Representational Power of Neural Machine Translation Models"

39 / 39 papers shown

  • Linking forward-pass dynamics in Transformers and real-time human processing
    Jennifer Hu, Michael A. Lepori, Michael Franke · AI4CE · 113 / 0 / 0 · 18 Apr 2025
  • Tokenization and Morphology in Multilingual Language Models: A Comparative Analysis of mT5 and ByT5
    Thao Anh Dang, Limor Raviv, Lukas Galke · 23 / 1 / 0 · 15 Oct 2024
  • Investigating OCR-Sensitive Neurons to Improve Entity Recognition in Historical Documents
    Emanuela Boros, Maud Ehrmann · 31 / 0 / 0 · 25 Sep 2024
  • Monitoring Latent World States in Language Models with Propositional Probes
    Jiahai Feng, Stuart Russell, Jacob Steinhardt · HILM · 32 / 6 / 0 · 27 Jun 2024
  • Layer-wise Representation Fusion for Compositional Generalization
    Yafang Zheng, Lei Lin, Shantao Liu, Binling Wang, Zhaohong Lai, Wenhao Rao, Biao Fu, Yidong Chen, Xiaodong Shi · AI4CE · 30 / 2 / 0 · 20 Jul 2023
  • Can LLMs facilitate interpretation of pre-trained language models?
    Basel Mousi, Nadir Durrani, Fahim Dalvi · 36 / 12 / 0 · 22 May 2023
  • The Interpreter Understands Your Meaning: End-to-end Spoken Language Understanding Aided by Speech Translation
    Mutian He, Philip N. Garner · 36 / 4 / 0 · 16 May 2023
  • Explaining Language Models' Predictions with High-Impact Concepts
    Ruochen Zhao, Shafiq R. Joty, Yongjie Wang, Tan Wang · LRM · 63 / 8 / 0 · 03 May 2023
  • NxPlain: Web-based Tool for Discovery of Latent Concepts
    Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Tamim Jaban, Musab Husaini, Ummar Abbas · 13 / 1 / 0 · 06 Mar 2023
  • Interpretability in Activation Space Analysis of Transformers: A Focused Survey
    Soniya Vijayakumar · AI4CE · 27 / 3 / 0 · 22 Jan 2023
  • On the Transformation of Latent Space in Fine-Tuned NLP Models
    Nadir Durrani, Hassan Sajjad, Fahim Dalvi, Firoj Alam · 27 / 18 / 0 · 23 Oct 2022
  • Post-hoc analysis of Arabic transformer models
    Ahmed Abdelali, Nadir Durrani, Fahim Dalvi, Hassan Sajjad · 10 / 1 / 0 · 18 Oct 2022
  • Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings
    Yile Wang, Yue Zhang · 11 / 4 / 0 · 20 Aug 2022
  • Analyzing Encoded Concepts in Transformer Language Models
    Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Firoj Alam, A. Khan, Jia Xu · 8 / 40 / 0 · 27 Jun 2022
  • Discovering Salient Neurons in Deep NLP Models
    Nadir Durrani, Fahim Dalvi, Hassan Sajjad · KELM, MILM · 14 / 15 / 0 · 27 Jun 2022
  • Discovering Latent Concepts Learned in BERT
    Fahim Dalvi, A. Khan, Firoj Alam, Nadir Durrani, Jia Xu, Hassan Sajjad · SSL · 11 / 56 / 0 · 15 May 2022
  • Probing for Constituency Structure in Neural Language Models
    David Arps, Younes Samih, Laura Kallmeyer, Hassan Sajjad · 19 / 12 / 0 · 13 Apr 2022
  • Under the Morphosyntactic Lens: A Multifaceted Evaluation of Gender Bias in Speech Translation
    Beatrice Savoldi, Marco Gaido, L. Bentivogli, Matteo Negri, Marco Turchi · 38 / 26 / 0 · 18 Mar 2022
  • Screening Gender Transfer in Neural Machine Translation
    Guillaume Wisniewski, Lichao Zhu, Nicolas Ballier, François Yvon · 6 / 4 / 0 · 25 Feb 2022
  • Probing Pretrained Models of Source Code
    Sergey Troshin, Nadezhda Chirkova · ELM · 25 / 38 / 0 · 16 Feb 2022
  • How Suitable Are Subword Segmentation Strategies for Translating Non-Concatenative Morphology?
    Chantal Amrhein, Rico Sennrich · 22 / 13 / 0 · 02 Sep 2021
  • Neuron-level Interpretation of Deep NLP Models: A Survey
    Hassan Sajjad, Nadir Durrani, Fahim Dalvi · MILM, AI4CE · 22 / 79 / 0 · 30 Aug 2021
  • How transfer learning impacts linguistic knowledge in deep NLP models?
    Nadir Durrani, Hassan Sajjad, Fahim Dalvi · 13 / 48 / 0 · 31 May 2021
  • How to Split: the Effect of Word Segmentation on Gender Bias in Speech Translation
    Marco Gaido, Beatrice Savoldi, L. Bentivogli, Matteo Negri, Marco Turchi · 54 / 15 / 0 · 28 May 2021
  • Fine-grained Interpretation and Causation Analysis in Deep NLP Models
    Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani · MILM · 17 / 8 / 0 · 17 May 2021
  • Searchable Hidden Intermediates for End-to-End Models of Decomposable Sequence Tasks
    Siddharth Dalmia, Brian Yan, Vikas Raunak, Florian Metze, Shinji Watanabe · 37 / 30 / 0 · 02 May 2021
  • Effect of Post-processing on Contextualized Word Representations
    Hassan Sajjad, Firoj Alam, Fahim Dalvi, Nadir Durrani · 6 / 9 / 0 · 15 Apr 2021
  • Mediators in Determining what Processing BERT Performs First
    Aviv Slobodkin, Leshem Choshen, Omri Abend · MoE · 52 / 15 / 0 · 13 Apr 2021
  • Gender Bias in Machine Translation
    Beatrice Savoldi, Marco Gaido, L. Bentivogli, Matteo Negri, Marco Turchi · 48 / 191 / 0 · 13 Apr 2021
  • What's the best place for an AI conference, Vancouver or ______: Why completing comparative questions is difficult
    Avishai Zagoury, Einat Minkov, Idan Szpektor, William W. Cohen · ELM · 22 / 6 / 0 · 05 Apr 2021
  • Probing Classifiers: Promises, Shortcomings, and Advances
    Yonatan Belinkov · 224 / 404 / 0 · 24 Feb 2021
  • Infusing Finetuning with Semantic Dependencies
    Zhaofeng Wu, Hao Peng, Noah A. Smith · 17 / 36 / 0 · 10 Dec 2020
  • Understanding Pure Character-Based Neural Machine Translation: The Case of Translating Finnish into English
    Gongbo Tang, Rico Sennrich, Joakim Nivre · 12 / 7 / 0 · 06 Nov 2020
  • Analyzing Individual Neurons in Pre-trained Language Models
    Nadir Durrani, Hassan Sajjad, Fahim Dalvi, Yonatan Belinkov · MILM · 4 / 104 / 0 · 06 Oct 2020
  • Dissecting Lottery Ticket Transformers: Structural and Behavioral Study of Sparse Neural Machine Translation
    Rajiv Movva, Jason Zhao · 10 / 12 / 0 · 17 Sep 2020
  • On the Effect of Dropping Layers of Pre-trained Transformer Models
    Hassan Sajjad, Fahim Dalvi, Nadir Durrani, Preslav Nakov · 23 / 131 / 0 · 08 Apr 2020
  • What you can cram into a single vector: Probing sentence embeddings for linguistic properties
    Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni · 199 / 882 / 0 · 03 May 2018
  • Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
    Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean · AIMat · 716 / 6,740 / 0 · 26 Sep 2016
  • Neural versus Phrase-Based Machine Translation Quality: a Case Study
    L. Bentivogli, Arianna Bisazza, Mauro Cettolo, Marcello Federico · 191 / 328 / 0 · 16 Aug 2016