Word Order Does Matter (And Shuffled Language Models Know It)

Annual Meeting of the Association for Computational Linguistics (ACL), 2022
21 March 2022
Vinit Ravishankar, Mostafa Abdou, Artur Kulmizev, Anders Søgaard
arXiv:2203.10995

Papers citing "Word Order Does Matter (And Shuffled Language Models Know It)"

31 / 31 papers shown
National Institute on Aging PREPARE Challenge: Early Detection of Cognitive Impairment Using Speech -- The SpeechCARE Solution
IEEE Robotics and Automation Letters (IEEE RA-L), 2025
Maryam Zolnoori, Hossein Azadmaleki, Yasaman Haghbin, Ali Zolnour, Mohammad Javad Momeni Nezhad, Sina Rashidi, Mehdi Naserian, Elyas Esmaeili, Sepehr Karimi Arpanahi
11 Nov 2025

Biasless Language Models Learn Unnaturally: How LLMs Fail to Distinguish the Possible from the Impossible
Imry Ziv, Nur Lan, Emmanuel Chemla, Roni Katzir
08 Oct 2025

Trick or Neat: Adversarial Ambiguity and Language Model Evaluation
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Antonia Karamolegkou, Oliver Eberle, Phillip Rust, Carina Kauf, Anders Søgaard
01 Jun 2025

Order Doesn't Matter, But Reasoning Does: Training LLMs with Order-Centric Augmentation
Qianxi He, Qianyu He, Jiaqing Liang, Yanghua Xiao, Weikang Zhou, Zeye Sun, Fei Yu
27 Feb 2025

Refining Packing and Shuffling Strategies for Enhanced Performance in Generative Language Models
Yanbing Chen, Ruilin Wang, Zihao Yang, L. Jiang, E. Oermann
19 Aug 2024

Does Incomplete Syntax Influence Korean Language Model? Focusing on Word Order and Case Markers
Jong Myoung Kim, Young-Jun Lee, Yong-jin Han, Sangkeun Jung, Ho-Jin Choi
12 Jul 2024

Black Big Boxes: Do Language Models Hide a Theory of Adjective Order?
Jaap Jumelet, Lisa Bylinina, Willem H. Zuidema, Jakub Szymanik
02 Jul 2024

Tokenization Falling Short: The Curse of Tokenization
Yekun Chai, Yewei Fang, Qiwei Peng, Xuhong Li
17 Jun 2024

Analyzing Semantic Change through Lexical Replacements
Francesco Periti, Pierluigi Cassotti, Haim Dubossarsky, Nina Tahmasebi
29 Apr 2024

Topic Aware Probing: From Sentence Length Prediction to Idiom Identification how reliant are Neural Language Models on Topic?
Vasudevan Nedumpozhimana, John D. Kelleher
04 Mar 2024

Mitigating Reversal Curse in Large Language Models via Semantic-aware Permutation Training
Qingyan Guo, Rui Wang, Junliang Guo, Xu Tan, Jiang Bian, Yujiu Yang
01 Mar 2024

Word Order and World Knowledge
Qinghua Zhao, Vinit Ravishankar, Nicolas Garneau, Anders Søgaard
01 Mar 2024

Premise Order Matters in Reasoning with Large Language Models
Xinyun Chen, Ryan A. Chi, Xuezhi Wang, Denny Zhou
14 Feb 2024

Mission: Impossible Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Julie Kallini, Isabel Papadimitriou, Richard Futrell, Kyle Mahowald, Christopher Potts
12 Jan 2024

Unnatural Error Correction: GPT-4 Can Almost Perfectly Handle Unnatural Scrambled Text
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Qi Cao, Takeshi Kojima, Yutaka Matsuo, Yusuke Iwasawa
30 Nov 2023

The Locality and Symmetry of Positional Encodings
Lihu Chen, Gaël Varoquaux, Fabian M. Suchanek
19 Oct 2023

ContextRef: Evaluating Referenceless Metrics For Image Description Generation
International Conference on Learning Representations (ICLR), 2023
Elisa Kreiss, E. Zelikman, Christopher Potts, Nick Haber
21 Sep 2023

CLEVA: Chinese Language Models EVAluation Platform
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Yanyang Li, Jianqiao Zhao, Duo Zheng, Zi-Yuan Hu, Zhi Chen, ..., Yongfeng Huang, Shijia Huang, Dahua Lin, Michael R. Lyu, Liwei Wang
09 Aug 2023

Why Do We Need Neuro-symbolic AI to Model Pragmatic Analogies?
IEEE Intelligent Systems (IEEE Intell. Syst.), 2023
Thilini Wijesiriwardene, Amit P. Sheth, V. Shalin, Amitava Das
02 Aug 2023

Does Character-level Information Always Improve DRS-based Semantic Parsing?
Tomoya Kurosawa, Hitomi Yanaka
04 Jun 2023

Debiasing should be Good and Bad: Measuring the Consistency of Debiasing Techniques in Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Robert D Morabito, Jad Kabbara, Ali Emami
23 May 2023

Towards preserving word order importance through Forced Invalidation
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023
Hadeel Al-Negheimish, Pranava Madhyastha, Alessandra Russo
11 Apr 2023

Language Model Behavior: A Comprehensive Survey
International Conference on Computational Logic (ICCL), 2023
Tyler A. Chang, Benjamin Bergen
20 Mar 2023

Denoising-based UNMT is more robust to word-order divergence than MASS-based UNMT
Tamali Banerjee, V. Rudra Murthy, P. Bhattacharyya
02 Mar 2023

Local Structure Matters Most in Most Languages
Louis Clouâtre, Prasanna Parthasarathi, Payel Das, Sarath Chandar
09 Nov 2022

Word Order Matters when you Increase Masking
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Karim Lasri, Alessandro Lenci, Thierry Poibeau
08 Nov 2022

Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer
Dimitris Mamakas, Petros Tsotsi, Ion Androutsopoulos, Ilias Chalkidis
02 Nov 2022

Oolong: Investigating What Makes Transfer Learning Hard with Controlled Studies
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Zhengxuan Wu, Alex Tamkin, Isabel Papadimitriou
24 Feb 2022

Grammatical cues to subjecthood are redundant in a majority of simple clauses across languages
Kyle Mahowald, Evgeniia Diachek, E. Gibson, Evelina Fedorenko, Richard Futrell
30 Jan 2022

Do Prompt-Based Models Really Understand the Meaning of their Prompts?
Albert Webson, Ellie Pavlick
02 Sep 2021

On the Relationship between Self-Attention and Convolutional Layers
International Conference on Learning Representations (ICLR), 2019
Jean-Baptiste Cordonnier, Andreas Loukas, Martin Jaggi
08 Nov 2019