Can Transformer be Too Compositional? Analysing Idiom Processing in Neural Machine Translation
Verna Dankers, Christopher G. Lucas, Ivan Titov
30 May 2022 · arXiv:2205.15301
Papers citing "Can Transformer be Too Compositional? Analysing Idiom Processing in Neural Machine Translation" (7 of 7 shown):
- Geometric Signatures of Compositionality Across a Language Model's Lifetime · Jin Hwa Lee, Thomas Jiralerspong, Lei Yu, Yoshua Bengio, Emily Cheng · 02 Oct 2024 · CoGe
- Divergences between Language Models and Human Brains · Yuchen Zhou, Emmy Liu, Graham Neubig, Michael J. Tarr, Leila Wehbe · 15 Nov 2023
- LEACE: Perfect linear concept erasure in closed form · Nora Belrose, David Schneider-Joseph, Shauli Ravfogel, Ryan Cotterell, Edward Raff, Stella Biderman · 06 Jun 2023 · KELM, MU
- The paradox of the compositionality of natural language: a neural machine translation case study · Verna Dankers, Elia Bruni, Dieuwke Hupkes · 12 Aug 2021 · CoGe
- On Compositional Generalization of Neural Machine Translation · Yafu Li, Yongjing Yin, Yulong Chen, Yue Zhang · 31 May 2021
- The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives · Elena Voita, Rico Sennrich, Ivan Titov · 03 Sep 2019
- What you can cram into a single vector: Probing sentence embeddings for linguistic properties · Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni · 03 May 2018