
Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics

2 November 2023
Yuhan Zhang, Edward Gibson, Forrest Davis
arXiv:2311.01386

Papers citing "Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics"

4 / 4 papers shown
1. Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics
   J. Michaelov, Catherine Arnett, Benjamin Bergen
   30 Apr 2024

2. Simple Linguistic Inferences of Large Language Models (LLMs): Blind Spots and Blinds
   Victoria Basmov, Yoav Goldberg, Reut Tsarfaty
   ReLM, LRM
   24 May 2023

3. Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities
   Suhas Arehalli, Brian Dillon, Tal Linzen
   21 Oct 2022

4. Frequency Effects on Syntactic Rule Learning in Transformers
   Jason W. Wei, Dan Garrette, Tal Linzen, Ellie Pavlick
   14 Sep 2021