ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
Emergent Word Order Universals from Cognitively-Motivated Language Models

19 February 2024
Tatsuki Kuribayashi, Ryo Ueda, Ryosuke Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin

Papers citing "Emergent Word Order Universals from Cognitively-Motivated Language Models"

7 papers shown
Frequency Explains the Inverse Correlation of Large Language Models' Size, Training Data Amount, and Surprisal's Fit to Reading Times
Byung-Doh Oh, Shisen Yue, William Schuler
03 Feb 2024
Communication Drives the Emergence of Language Universals in Neural Agents: Evidence from the Word-order/Case-marking Trade-off
Yuchen Lian, Arianna Bisazza, Tessa Verhoef
30 Jan 2023
Discourse Context Predictability Effects in Hindi Word Order
Sidharth Ranjan, Marten van Schijndel, Sumeet Agarwal, Rajakrishnan Rajkumar
25 Oct 2022
Emergent Communication: Generalization and Overfitting in Lewis Games
Mathieu Rita, Corentin Tallec, Paul Michel, Jean-Bastien Grill, Olivier Pietquin, Emmanuel Dupoux, Florian Strub
30 Sep 2022
Neural Networks and the Chomsky Hierarchy
Grégoire Delétang, Anian Ruoss, Jordi Grau-Moya, Tim Genewein, L. Wenliang, ..., Chris Cundy, Marcus Hutter, Shane Legg, Joel Veness, Pedro A. Ortega
05 Jul 2022
Context Limitations Make Neural Language Models More Human-Like
Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui
23 May 2022
Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars
Ryo Yoshida, Hiroshi Noji, Yohei Oseki
10 Sep 2021