ResearchTrend.AI

How Vulnerable Are Automatic Fake News Detection Methods to Adversarial Attacks?

16 July 2021
Camille Koenders, Johannes Filla, Nicolai Schneider, Vinicius Woloszyn
    GNN

Papers citing "How Vulnerable Are Automatic Fake News Detection Methods to Adversarial Attacks?"

1 / 1 papers shown

Adversarial Style Augmentation via Large Language Model for Robust Fake News Detection
Sungwon Park, Sungwon Han, Xing Xie, Jae-Gil Lee, Meeyoung Cha
17 Jun 2024