ResearchTrend.AI
Assessing the Effectiveness of GPT-3 in Detecting False Political Statements: A Case Study on the LIAR Dataset

Mars Gokturk Buchholz
14 June 2023

Papers citing "Assessing the Effectiveness of GPT-3 in Detecting False Political Statements: A Case Study on the LIAR Dataset"

7 / 7 papers shown
Firm or Fickle? Evaluating Large Language Models Consistency in Sequential Interactions
Yubo Li, Yidi Miao, Xueying Ding, Ramayya Krishnan, R. Padman (28 Mar 2025)
"The Data Says Otherwise"-Towards Automated Fact-checking and
  Communication of Data Claims
"The Data Says Otherwise"-Towards Automated Fact-checking and Communication of Data Claims
Yu Fu
Shunan Guo
Jane Hoffswell
Victor S. Bursztyn
Ryan A. Rossi
J. Stasko
31
2
0
16 Sep 2024
Identifying the sources of ideological bias in GPT models through linguistic variation in output
Christina Walker, Joan C. Timoneda (09 Sep 2024)
CommunityKG-RAG: Leveraging Community Structures in Knowledge Graphs for Advanced Retrieval-Augmented Generation in Fact-Checking
Rong-Ching Chang, Jiawei Zhang (16 Aug 2024)
Generative Large Language Models in Automated Fact-Checking: A Survey
Ivan Vykopal, Matúš Pikuliak, Simon Ostermann, Marian Simko (02 Jul 2024)
MMIDR: Teaching Large Language Model to Interpret Multimodal Misinformation via Knowledge Distillation
Longzheng Wang, Xiaohan Xu, Lei Zhang, Jiarui Lu, Yongxiu Xu, Hongbo Xu, Xuancheng Huang, Chuang Zhang (21 Mar 2024)
Can LLM-Generated Misinformation Be Detected?
Canyu Chen, Kai Shu (25 Sep 2023)