ResearchTrend.AI
Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection

AAAI Conference on Artificial Intelligence (AAAI), 2025
12 March 2025
Chaowei Zhang
Zongling Feng
Zewei Zhang
Jipeng Qiang
Guandong Xu
Yun Li
Community: LRM
arXiv: 2503.09153 (abs | PDF | HTML)

Papers citing "Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection"

2 papers shown
SARC: Sentiment-Augmented Deep Role Clustering for Fake News Detection
Jingqing Wang
Jiaxing Shang
Rong Xu
Fei Hao
Tianjin Huang
Geyong Min
28 Oct 2025
Misinformation Detection using Large Language Models with Explainability
Jainee Patel
Chintan Bhatt
Himani Trivedi
Thanh Thi Nguyen
21 Oct 2025