People cannot distinguish GPT-4 from a human in a Turing test
Cameron R. Jones, Benjamin K. Bergen
arXiv:2405.08007 · 9 May 2024 · ELM, DeLMO

Papers citing "People cannot distinguish GPT-4 from a human in a Turing test"

19 / 19 papers shown
Assessing LLMs in Art Contexts: Critique Generation and Theory of Mind Evaluation
Takaya Arita, Wenxian Zheng, Reiji Suzuki, Fuminori Akiba · 17 Apr 2025

IMPersona: Evaluating Individual Level LM Impersonation
Quan Shi, Carlos E. Jimenez, Stephen Dong, Brian Seo, Caden Yao, Adam Kelch, Karthik Narasimhan · 06 Apr 2025

Verification of Autonomous Neural Car Control with KeYmaera X
Enguerrand Prebet, Samuel Teuber, André Platzer · 04 Apr 2025

Do Large Language Models Exhibit Spontaneous Rational Deception?
Samuel M. Taylor, Benjamin K. Bergen · LRM · 31 Mar 2025

Stress Testing Generalization: How Minor Modifications Undermine Large Language Model Performance
Guangxiang Zhao, Saier Hu, Xiaoqi Jian, Jinzhu Wu, Yuhan Wu, Change Jia, Lin Sun, Xiangzheng Zhang · 18 Feb 2025

Lies, Damned Lies, and Distributional Language Statistics: Persuasion and Deception with Large Language Models
Cameron R. Jones, Benjamin Bergen · 22 Dec 2024

Probing for Consciousness in Machines
Mathis Immertreu, A. Schilling, Andreas K. Maier, P. Krauss · AI4CE · 25 Nov 2024

AI-Driven Agents with Prompts Designed for High Agreeableness Increase the Likelihood of Being Mistaken for a Human in the Turing Test
U. León-Domínguez, E. D. Flores-Flores, A. J. García-Jasso, M. K. Gómez-Cuellar, D. Torres-Sánchez, A. Basora-Marimon · AI4CE · 20 Nov 2024

The Potential and Value of AI Chatbot in Personalized Cognitive Training
Z. Wang, Nan Chen, Luna Qiu, Ling Yue, Geli Guo, Yang Ou, Shiqi Jiang, Yuqing Yang, Lili Qiu · 25 Oct 2024

From Imitation to Introspection: Probing Self-Consciousness in Language Models
Sirui Chen, Shu Yu, Shengjie Zhao, Chaochao Lu · MILM, LRM · 24 Oct 2024

Self-Directed Turing Test for Large Language Models
Weiqi Wu, Hongqiu Wu, Hai Zhao · LLMAG, LM&MA, ALM, LRM · 19 Aug 2024

How to Measure the Intelligence of Large Language Models?
Nils Korber, Silvan Wehrli, Christopher Irrgang · ELM, ALM · 30 Jul 2024

GPT-4 is judged more human than humans in displaced and inverted Turing tests
Ishika Rathi, Sydney Taylor, Benjamin K. Bergen, Cameron R. Jones · DeLMO · 11 Jul 2024

Exploring Human-LLM Conversations: Mental Models and the Originator of Toxicity
Johannes Schneider, Arianna Casanova Flores, Anne-Catherine Kranz · 08 Jul 2024

Societal Adaptation to Advanced AI
Jamie Bernardi, Gabriel Mukobi, Hilary Greaves, Lennart Heim, Markus Anderljung · 16 May 2024

AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy
P. Schoenegger, Peter S. Park, Ezra Karger, P. Tetlock · 12 Feb 2024

The Debate Over Understanding in AI's Large Language Models
Melanie Mitchell, D. Krakauer · ELM · 14 Oct 2022

Using cognitive psychology to understand GPT-3
Marcel Binz, Eric Schulz · ELM, LLMAG · 21 Jun 2022

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman · ELM · 20 Apr 2018