Both Direct and Indirect Evidence Contribute to Dative Alternation Preferences in Language Models

26 March 2025
Qing Yao
Kanishka Misra
Leonie Weissweiler
Kyle Mahowald
arXiv: 2503.20850 (abs · PDF · HTML)

Papers citing "Both Direct and Indirect Evidence Contribute to Dative Alternation Preferences in Language Models"

3 citing papers
Unpacking Let Alone: Human-Scale Models Generalize to a Rare Construction in Form but not Meaning
Wesley Scivetti
Tatsuya Aoyama
Ethan Wilcox
Nathan Schneider
04 Jun 2025
Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
Alex Warstadt
Aaron Mueller
Leshem Choshen
Ethan Wilcox
Chengxu Zhuang
...
Rafael Mosquera
Bhargavi Paranjape
Adina Williams
Tal Linzen
Robert Bamler
10 Apr 2025
Can Language Models Learn Typologically Implausible Languages?
Tianyang Xu
Tatsuki Kuribayashi
Yohei Oseki
Robert Bamler
Alex Warstadt
17 Feb 2025