Explainable Fake News Detection With Large Language Model via Defense Among Competing Wisdom

6 May 2024
Bo Wang, Jing Ma, Hongzhan Lin, Zhiwei Yang, Ruichao Yang, Yuan Tian, Yi-Ju Chang
AAML

Papers citing "Explainable Fake News Detection With Large Language Model via Defense Among Competing Wisdom"

8 of 8 citing papers shown.

Detecting Manipulated Contents Using Knowledge-Grounded Inference
Mark Huasong Meng, Ruizhe Wang, Meng Xu, Chuan Yan, Guangdong Bai
29 Apr 2025

FACT-AUDIT: An Adaptive Multi-Agent Framework for Dynamic Fact-Checking Evaluation of Large Language Models
Hongzhan Lin, Yang Deng, Yuxuan Gu, Wenxuan Zhang, Jing Ma, See-Kiong Ng, Tat-Seng Chua
LLMAG · KELM · HILM
25 Feb 2025

LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media Content
Mohamed Bayan Kmainasi, Ali Ezzat Shahroor, Maram Hasanain, Sahinur Rahman Laskar, Naeemul Hassan, Firoj Alam
20 Oct 2024

MFC-Bench: Benchmarking Multimodal Fact-Checking with Large Vision-Language Models
Shengkang Wang, Hongzhan Lin, Ziyang Luo, Zhen Ye, Guang Chen, Jing Ma
17 Jun 2024

Adaptive Activation Steering: A Tuning-Free LLM Truthfulness Improvement Method for Diverse Hallucinations Categories
Tianlong Wang, Xianfeng Jiao, Yifan He, Zhongzhi Chen, Yinghao Zhu, Xu Chu, Junyi Gao, Yasha Wang, Liantao Ma
LLMSV
26 May 2024

Can Large Language Models Be an Alternative to Human Evaluations?
Cheng-Han Chiang, Hung-yi Lee
ALM · LM&MA
03 May 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM · ALM
04 Mar 2022

Explainable Automated Fact-Checking for Public Health Claims
Neema Kotonya, Francesca Toni
19 Oct 2020