ResearchTrend.AI

Inducing anxiety in large language models increases exploration and bias
arXiv:2304.11111 · 21 April 2023
Julian Coda-Forno, Kristin Witte, A. Jagadish, Marcel Binz, Zeynep Akata, Eric Schulz
Community: AI4CE

Papers citing "Inducing anxiety in large language models increases exploration and bias"

12 citing papers:

1. Enhancing Patient-Centric Communication: Leveraging LLMs to Simulate Patient Perspectives
   Xinyao Ma, Rui Zhu, Zihao Wang, Jingwei Xiong, Qingyu Chen, Haixu Tang, L. Jean Camp, Lucila Ohno-Machado
   LM&MA · 12 Jan 2025

2. Assessment and manipulation of latent constructs in pre-trained language models using psychometric scales
   Maor Reuben, Ortal Slobodin, Aviad Elyshar, Idan-Chaim Cohen, Orna Braun-Lewensohn, Odeya Cohen, Rami Puzis
   29 Sep 2024

3. With Ears to See and Eyes to Hear: Sound Symbolism Experiments with Multimodal Large Language Models
   Tyler Loakman, Yucheng Li, Chenghua Lin
   VLM · 23 Sep 2024

4. Deception Abilities Emerged in Large Language Models
   Thilo Hagendorff
   LLMAG · 31 Jul 2023

5. Turning large language models into cognitive models
   Marcel Binz, Eric Schulz
   06 Jun 2023

6. Playing repeated games with Large Language Models
   Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, Eric Schulz
   26 May 2023

7. In-Context Impersonation Reveals Large Language Models' Strengths and Biases
   Leonard Salewski, Stephan Alaniz, Isabel Rio-Torto, Eric Schulz, Zeynep Akata
   24 May 2023

8. LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action
   Dhruv Shah, B. Osinski, Brian Ichter, Sergey Levine
   LM&Ro · 10 Jul 2022

9. Using cognitive psychology to understand GPT-3
   Marcel Binz, Eric Schulz
   ELM, LLMAG · 21 Jun 2022

10. Large Language Models are Zero-Shot Reasoners
    Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
    ReLM, LRM · 24 May 2022

11. Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
    Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp
    AILaw, LRM · 18 Apr 2021

12. What Makes Good In-Context Examples for GPT-3?
    Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen
    AAML, RALM · 17 Jan 2021