
arXiv: 1901.09657
Fake News Detection via NLP is Vulnerable to Adversarial Attacks

5 January 2019
Zhixuan Zhou
Huankang Guan
Meghana Moorthy Bhat
Justin Hsu
Papers citing "Fake News Detection via NLP is Vulnerable to Adversarial Attacks"

1 of 1 citing papers shown

Title: Adversarial Style Augmentation via Large Language Model for Robust Fake News Detection
Authors: Sungwon Park, Sungwon Han, Xing Xie, Jae-Gil Lee, Meeyoung Cha
Date: 17 Jun 2024