The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models
Ulme Wennberg, G. Henter
3 June 2021 · arXiv:2106.01950

Papers citing "The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models" (8 papers):

Are queries and keys always relevant? A case study on Transformer wave functions
Riccardo Rende, Luciano Loris Viteritti · 29 May 2024

Matcha-TTS: A fast TTS architecture with conditional flow matching
Shivam Mehta, Ruibo Tu, Jonas Beskow, Éva Székely, G. Henter · 06 Sep 2023

LEA: Improving Sentence Similarity Robustness to Typos Using Lexical Attention Bias
Mario Almagro, Emilio Almazán, Diego Ortego, David Jiménez · 06 Jul 2023

Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models
Simon Alexanderson, Rajmund Nagy, Jonas Beskow, G. Henter · 17 Nov 2022

KERPLE: Kernelized Relative Positional Embedding for Length Extrapolation
Ta-Chung Chi, Ting-Han Fan, Peter J. Ramadge, Alexander I. Rudnicky · 20 May 2022

Transflower: probabilistic autoregressive dance generation with multimodal attention
Guillermo Valle Pérez, G. Henter, Jonas Beskow, A. Holzapfel, Pierre-Yves Oudeyer, Simon Alexanderson · 25 Jun 2021

Out of Order: How Important Is The Sequential Order of Words in a Sentence in Natural Language Understanding Tasks?
Thang M. Pham, Trung Bui, Long Mai, Anh Totti Nguyen · 30 Dec 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman · 20 Apr 2018