ResearchTrend.AI
Will the Real Linda Please Stand up...to Large Language Models? Examining the Representativeness Heuristic in LLMs
arXiv 2404.01461, 1 April 2024
Pengda Wang
Zilin Xiao
Hanjie Chen
Frederick L. Oswald

Papers citing "Will the Real Linda Please Stand up...to Large Language Models? Examining the Representativeness Heuristic in LLMs"

8 / 8 papers shown
The Bias is in the Details: An Assessment of Cognitive Bias in LLMs
R. Alexander Knipper
Charles S. Knipper
Kaiqi Zhang
Valerie Sims
Clint Bowers
Santu Karmaker
127
2
0
26 Sep 2025
Can LLM "Self-report"?: Evaluating the Validity of Self-report Scales in Measuring Personality Design in LLM-based Chatbots
Huiqi Zou
Pengda Wang
Zihan Yan
Tianjun Sun
Ziang Xiao
535
19
0
29 Nov 2024
Meaningless is better: hashing bias-inducing words in LLM prompts improves performance in logical reasoning and statistical learning
Milena Chadimová
Eduard Jurášek
Tomáš Kliegr
533
0
0
26 Nov 2024
From Babbling to Fluency: Evaluating the Evolution of Language Models in Terms of Human Language Acquisition
Qiyuan Yang
Pengda Wang
Luke D. Plonsky
Frederick L. Oswald
Hanjie Chen
ELM
324
2
0
17 Oct 2024
RepoGraph: Enhancing AI Software Engineering with Repository-level Code Graph
International Conference on Learning Representations (ICLR), 2024
Siru Ouyang
Wenhao Yu
Kaixin Ma
Zilin Xiao
Zizhuo Zhang
Mengzhao Jia
Jiawei Han
Han Zhang
Dong Yu
457
73
0
03 Oct 2024
A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners
Bowen Jiang
Yangxinyu Xie
Zhuoqun Hao
Xiaomeng Wang
Tanwi Mallick
Weijie J. Su
Camillo J. Taylor
Dan Roth
LRM
371
107
0
16 Jun 2024
Towards Rationality in Language and Multimodal Agents: A Survey
Bowen Jiang
Yangxinyu Xie
Xiaomeng Wang
Yuan Yuan
Camillo J. Taylor
Tanwi Mallick
Weijie J. Su
LLMAG
455
4
0
01 Jun 2024
Prompting Techniques for Reducing Social Bias in LLMs through System 1 and System 2 Cognitive Processes
M. Kamruzzaman
Gene Louis Kim
565
41
0
26 Apr 2024