Attacking Misinformation Detection Using Adversarial Examples Generated by Language Models

28 October 2024
Piotr Przybyła
    AAML
ArXiv (abs) · PDF · HTML

Papers citing "Attacking Misinformation Detection Using Adversarial Examples Generated by Language Models"

1 / 1 papers shown
CAMOUFLAGE: Exploiting Misinformation Detection Systems Through LLM-driven Adversarial Claim Transformation
Mazal Bethany, Nishant Vishwamitra, Cho-Yu Chiang, Peyman Najafirad
AAML
03 May 2025