

Out of Order: How Important Is The Sequential Order of Words in a Sentence in Natural Language Understanding Tasks?

Findings, 2020
30 December 2020
Thang M. Pham, Trung Bui, Long Mai, Anh Totti Nguyen
arXiv: 2012.15180

Papers citing "Out of Order: How Important Is The Sequential Order of Words in a Sentence in Natural Language Understanding Tasks?"

20 / 70 papers shown
Schrödinger's Tree -- On Syntax and Neural Language Models
Artur Kulmizev, Joakim Nivre
17 Oct 2021

Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
Arabella J. Sinclair, Jaap Jumelet, Willem H. Zuidema, Raquel Fernández
30 Sep 2021

Shaking Syntactic Trees on the Sesame Street: Multilingual Probing with Controllable Perturbations
Ekaterina Taktasheva, Vladislav Mikhailov, Ekaterina Artemova
28 Sep 2021

AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses
Dialogue and Discourse (DD), 2021
Yaman Kumar Singla, Swapnil Parekh, Somesh Singh, Junjie Li, R. Shah, Changyou Chen
24 Sep 2021

Numerical reasoning in machine reading comprehension tasks: are we there yet?
Hadeel Al-Negheimish, Pranava Madhyastha, A. Russo
16 Sep 2021

Studying word order through iterative shuffling
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Nikolay Malkin, Sameera Lanka, Pranav Goel, Nebojsa Jojic
10 Sep 2021

BERT might be Overkill: A Tiny but Effective Biomedical Entity Linker based on Residual Convolutional Neural Networks
T. Lai, Heng Ji, ChengXiang Zhai
06 Sep 2021

Do Prompt-Based Models Really Understand the Meaning of their Prompts?
Albert Webson, Ellie Pavlick
02 Sep 2021

How Does Adversarial Fine-Tuning Benefit BERT?
J. Ebrahimi, Hao Yang, Wei Zhang
31 Aug 2021

Local Structure Matters Most: Perturbation Study in NLU
Findings, 2021
Louis Clouâtre, Prasanna Parthasarathi, Payel Das, Sarath Chandar
29 Jul 2021

What Context Features Can Transformer Language Models Use?
J. O'Connor, Jacob Andreas
15 Jun 2021

The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2021
Ulme Wennberg, G. Henter
03 Jun 2021

Towards mental time travel: a hierarchical memory for reinforcement learning agents
Neural Information Processing Systems (NeurIPS), 2021
Andrew Kyle Lampinen, Stephanie C. Y. Chan, Andrea Banino, Felix Hill
28 May 2021

Identifying the Limits of Cross-Domain Knowledge Transfer for Pretrained Models
Workshop on Representation Learning for NLP (RepL4NLP), 2021
Zhengxuan Wu, Nelson F. Liu, Christopher Potts
17 Apr 2021

Sometimes We Want Translationese
Prasanna Parthasarathi, Koustuv Sinha, J. Pineau, Adina Williams
15 Apr 2021

Syntactic Perturbations Reveal Representational Correlates of Hierarchical Phrase Structure in Pretrained Language Models
Workshop on Representation Learning for NLP (RepL4NLP), 2021
Matteo Alleman, J. Mamou, Miguel Rio, Hanlin Tang, Yoon Kim, SueYeon Chung
15 Apr 2021

Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Koustuv Sinha, Robin Jia, Dieuwke Hupkes, J. Pineau, Adina Williams, Douwe Kiela
14 Apr 2021

NLI Data Sanity Check: Assessing the Effect of Data Corruption on Model Performance
Nordic Conference of Computational Linguistics (NoDaLiDa), 2021
Aarne Talman, Marianna Apidianaki, S. Chatzikyriakidis, Jörg Tiedemann
10 Apr 2021

Interpretable bias mitigation for textual data: Reducing gender bias in patient notes while maintaining classification performance
J. Minot, N. Cheney, Marc E. Maier, Danne C. Elbers, C. Danforth, P. Dodds
10 Mar 2021

Rissanen Data Analysis: Examining Dataset Characteristics via Description Length
International Conference on Machine Learning (ICML), 2021
Ethan Perez, Douwe Kiela, Dong Wang
05 Mar 2021