ResearchTrend.AI
AI model GPT-3 (dis)informs us better than humans

23 January 2023
Giovanni Spitale
Nikola Biller-Andorno
Federico Germani
    DeLMO

Papers citing "AI model GPT-3 (dis)informs us better than humans"

50 / 55 papers shown
ChatGPT-generated texts show authorship traits that identify them as non-human
Vittoria Dentella
Weihang Huang
Silvia Angela Mansi
Jack Grieve
Evelina Leivada
DeLMO
12
0
0
22 Aug 2025
Linguistic and Embedding-Based Profiling of Texts generated by Humans and Large Language Models
Sergio E. Zanotto
Segun Aroyehun
DeLMO
163
0
0
18 Jul 2025
PRISON: Unmasking the Criminal Potential of Large Language Models
Xinyi Wu
Geng Hong
Pei Chen
Yueyue Chen
Xudong Pan
Min Yang
86
0
0
19 Jun 2025
Sword and Shield: Uses and Strategies of LLMs in Navigating Disinformation
Gionnieve Lim
Bryan Chen Zhengyu Tan
Kellie Yu Hui Sim
Weiyan Shi
Ming Hui Chew
Ming Shan Hee
Roy Ka-wei Lee
S. Perrault
K. T. W. Choo
101
1
0
08 Jun 2025
Risks of AI-driven product development and strategies for their mitigation
Jan Göpfert
J. Weinand
Patrick Kuckertz
Noah Pflugradt
Jochen Linßen
103
1
0
28 May 2025
Domain Gating Ensemble Networks for AI-Generated Text Detection
Arihant Tripathi
Liam Dugan
Charis Gao
Maggie Huan
Emma Jin
Peter Zhang
David Zhang
Julia Zhao
Chris Callison-Burch
VLM
87
0
0
20 May 2025
LLM-Generated Fake News Induces Truth Decay in News Ecosystem: A Case Study on Neural News Recommendation
Beizhe Hu
Qiang Sheng
Juan Cao
Yang Li
Danding Wang
746
2
0
28 Apr 2025
Labeling Messages as AI-Generated Does Not Reduce Their Persuasive Effects
Isabel O. Gallegos
Chen Shani
Weiyan Shi
Federico Bianchi
Izzy Gainsburg
Dan Jurafsky
Robb Willer
124
5
0
14 Apr 2025
Increasing happiness through conversations with artificial intelligence
Joseph Heffner
Chongyu Qin
Martin Chadwick
Chris Knutsen
Christopher Summerfield
Zeb Kurth-Nelson
Robb B. Rutledge
AI4MH
112
1
0
02 Apr 2025
Echoes of Power: Investigating Geopolitical Bias in US and China Large Language Models
Andre G. C. Pacheco
Athus Cavalini
Giovanni Comarela
141
2
0
20 Mar 2025
Large language models for automated scholarly paper review: A survey
Zhenzhen Zhuang
Jiandong Chen
Hongfeng Xu
Yuwen Jiang
Jialiang Lin
160
13
0
17 Jan 2025
GenAI Content Detection Task 3: Cross-Domain Machine-Generated Text Detection Challenge
Liam Dugan
Andrew Zhu
Firoj Alam
Preslav Nakov
Marianna Apidianaki
Chris Callison-Burch
DeLMO
159
12
0
15 Jan 2025
Lies, Damned Lies, and Distributional Language Statistics: Persuasion and Deception with Large Language Models
Cameron R. Jones
Benjamin Bergen
239
10
0
22 Dec 2024
Evaluating the Performance of Large Language Models in Scientific Claim Detection and Classification
Tanjim Bin Faruk
134
0
0
21 Dec 2024
Persuasion with Large Language Models: a Survey
Alexander Rogiers
Sander Noels
Maarten Buyl
Tijl De Bie
90
20
0
11 Nov 2024
Using GPT Models for Qualitative and Quantitative News Analytics in the 2024 US Presidential Election Process
Bohdan M. Pavlyshenko
71
0
0
21 Oct 2024
How will advanced AI systems impact democracy?
Christopher Summerfield
Lisa Argyle
Michiel Bakker
Teddy Collins
Esin Durmus
...
Elizabeth Seger
Divya Siddarth
Henrik Skaug Sætra
MH Tessler
M. Botvinick
136
7
0
27 Aug 2024
AI-Driven Review Systems: Evaluating LLMs in Scalable and Bias-Aware Academic Reviews
Keith Tyser
Ben Segev
Gaston Longhitano
Xin-Yu Zhang
Zachary Meeks
...
Nicholas Belsten
A. Shporer
Madeleine Udell
Dov Te’eni
Iddo Drori
94
31
0
19 Aug 2024
Large language models can consistently generate high-quality content for election disinformation operations
Angus R. Williams
Liam Burke-Moore
Ryan Sze-Yin Chan
Florence E. Enock
Federico Nanni
Tvesha Sippy
Yi-Ling Chung
Evelina Gabasova
Kobi Hackenburg
Jonathan Bright
94
7
0
13 Aug 2024
Scaling Trends in Language Model Robustness
Nikolhaus Howe
Michal Zajac
I. R. McKenzie
Oskar Hollinsworth
Tom Tseng
Aaron David Tucker
Pierre-Luc Bacon
Adam Gleave
297
10
0
25 Jul 2024
MetaSumPerceiver: Multimodal Multi-Document Evidence Summarization for Fact-Checking
Ting-Chih Chen
Chia-Wei Tang
Chris Thomas
134
7
0
18 Jul 2024
When LLMs Play the Telephone Game: Cultural Attractors as Conceptual Tools to Evaluate LLMs in Multi-turn Settings
Jérémy Perez
Corentin Léger
Grgur Kovač
Cédric Colas
Gaia Molinaro
Maxime Derex
Pierre-Yves Oudeyer
Clément Moulin-Frier
186
3
0
05 Jul 2024
Catching Chameleons: Detecting Evolving Disinformation Generated using Large Language Models
Bohan Jiang
Chengshuai Zhao
Zhen Tan
Huan Liu
104
2
0
26 Jun 2024
Investigating the Influence of Prompt-Specific Shortcuts in AI Generated Text Detection
Choonghyun Park
Sungmin Cho
Junyeob Kim
Youna Kim
Taeuk Kim
Hyunsoo Cho
Hwiyeol Jo
Sang-goo Lee
Kang Min Yoo
AAML
93
1
0
24 Jun 2024
Detecting AI-Generated Text: Factors Influencing Detectability with Current Methods
Kathleen C. Fraser
Hillary Dawkins
S. Kiritchenko
DeLMO
207
22
0
21 Jun 2024
PRISM: A Design Framework for Open-Source Foundation Model Safety
Terrence Neumann
Bryan Jones
131
1
0
14 Jun 2024
RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors
Liam Dugan
Alyssa Hwang
Filip Trhlik
Josh Magnus Ludan
Andrew Zhu
Hainiu Xu
Daphne Ippolito
Christopher Callison-Burch
DeLMO AAML
164
79
0
13 May 2024
Fake Artificial Intelligence Generated Contents (FAIGC): A Survey of Theories, Detection Methods, and Opportunities
Xiaomin Yu
Yezhaohui Wang
Yanfang Chen
Zhen Tao
Dinghao Xi
Shichao Song
Pengnian Qi
Zhiyu Li
167
13
0
25 Apr 2024
Autonomous LLM-driven research from data to human-verifiable research papers
Tal Ifargan
Lukas Hafner
Maor Kern
Ori Alcalay
Roy Kishony
155
29
0
24 Apr 2024
Fakes of Varying Shades: How Warning Affects Human Perception and Engagement Regarding LLM Hallucinations
Mahjabin Nahar
Haeseung Seo
Eun-Ju Lee
Aiping Xiong
Dongwon Lee
HILM
125
16
0
04 Apr 2024
Knowledge Conflicts for LLMs: A Survey
Rongwu Xu
Zehan Qi
Zhijiang Guo
Cunxiang Wang
Hongru Wang
Yue Zhang
Wei Xu
509
164
0
13 Mar 2024
A Survey of AI-generated Text Forensic Systems: Detection, Attribution, and Characterization
Tharindu Kumarage
Garima Agrawal
Paras Sheth
Raha Moraffah
Amanat Chadha
Joshua Garland
Huan Liu
DeLMO
105
18
0
02 Mar 2024
I Am Not Them: Fluid Identities and Persistent Out-group Bias in Large Language Models
Wenchao Dong
Assem Zhunis
Hyojin Chin
Jiyoung Han
Meeyoung Cha
89
2
0
16 Feb 2024
Lying Blindly: Bypassing ChatGPT's Safeguards to Generate Hard-to-Detect Disinformation Claims at Scale
Freddy Heppell
M. Bakir
Kalina Bontcheva
DeLMO
110
1
0
13 Feb 2024
Exploiting Novel GPT-4 APIs
Kellin Pelrine
Mohammad Taufeeque
Michal Zajac
Euan McLean
Adam Gleave
SILM
79
26
0
21 Dec 2023
ChatGPT as a commenter to the news: can LLMs generate human-like opinions?
Rayden Tseng
Suzan Verberne
P. V. D. Putten
ALM DeLMO LLMAG
47
8
0
21 Dec 2023
In Generative AI we Trust: Can Chatbots Effectively Verify Political Information?
Elizaveta Kuznetsova
M. Makhortykh
Victoria Vziatysheva
Martha Stolze
Ani Baghumyan
Aleksandra Urman
86
7
0
20 Dec 2023
On a Functional Definition of Intelligence
Warisa Sritriratanarak
Paulo Garcia
79
0
0
15 Dec 2023
The Earth is Flat because...: Investigating LLMs' Belief towards Misinformation via Persuasive Conversation
Rongwu Xu
Brian S. Lin
Shujian Yang
Tianqi Zhang
Weiyan Shi
Tianwei Zhang
Zhixuan Fang
Wei Xu
Han Qiu
282
73
0
14 Dec 2023
Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates
Aida Mostafazadeh Davani
Mark Díaz
Dylan K. Baker
Vinodkumar Prabhakaran
AAML
95
22
0
11 Dec 2023
Invisible Relevance Bias: Text-Image Retrieval Models Prefer AI-Generated Images
Shicheng Xu
Danyang Hou
Liang Pang
Jingcheng Deng
Jun Xu
Huawei Shen
Xueqi Cheng
108
18
0
23 Nov 2023
What Lies Beneath? Exploring the Impact of Underlying AI Model Updates in AI-Infused Systems
Vikram Mohanty
Jude Lim
Kurt Luther
88
0
0
17 Nov 2023
Adapting Fake News Detection to the Era of Large Language Models
Jinyan Su
Claire Cardie
Preslav Nakov
DeLMO
148
25
0
02 Nov 2023
LLMs may Dominate Information Access: Neural Retrievers are Biased Towards LLM-Generated Texts
Sunhao Dai
Yuqi Zhou
Liang Pang
Weihao Liu
Xiaolin Hu
Yong Liu
Xiao Zhang
Gang Wang
Jun Xu
142
44
0
31 Oct 2023
Fighting Fire with Fire: The Dual Role of LLMs in Crafting and Detecting Elusive Disinformation
Jason Samuel Lucas
Adaku Uchendu
Michiharu Yamashita
Jooyoung Lee
Shaurya Rohatgi
Dongwon Lee
127
53
0
24 Oct 2023
Disinformation Detection: An Evolving Challenge in the Age of LLMs
Qinglong Cao
Yuntian Chen
Ayushi Nirmal
Xiaokang Yang
DeLMO
135
60
0
25 Sep 2023
Can LLM-Generated Misinformation Be Detected?
Canyu Chen
Kai Shu
DeLMO
364
206
0
25 Sep 2023
Overview of AuTexTification at IberLEF 2023: Detection and Attribution of Machine-Generated Text in Multiple Domains
A. Sarvazyan
José Ángel González
Marc Franco-Salvador
Francisco Rangel
Berta Chulvi
Paolo Rosso
DeLMO
127
69
0
20 Sep 2023
Generative AI
Stefan Feuerriegel
Jochen Hartmann
Christian Janiesch
Patrick Zschech
162
810
0
13 Sep 2023
Use of LLMs for Illicit Purposes: Threats, Prevention Measures, and Vulnerabilities
Maximilian Mozes
Xuanli He
Bennett Kleinberg
Lewis D. Griffin
114
94
0
24 Aug 2023