ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Poisoning Attacks with Generative Adversarial Nets
arXiv:1906.07773 · 18 June 2019
Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu
Topic: AAML

Papers citing "Poisoning Attacks with Generative Adversarial Nets"

14 / 14 papers shown

  • Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm. S. M. Fazle, J. Mondal, Meem Arafat Manab, Xi Xiao, Sarfaraz Newaz. AAML. 18 Oct 2023.
  • Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling. Ethan Wisdom, Tejas Gokhale, Chaowei Xiao, Yezhou Yang. 30 Mar 2023.
  • Mitigating Adversarial Attacks in Deepfake Detection: An Exploration of Perturbation and AI Techniques. S. Dhesi, Laura Fontes, P. Machado, I. Ihianle, Farhad Fassihi Tash, D. Adama. AAML. 22 Feb 2023.
  • Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions. Marwan Omar. SILM, AAML. 14 Feb 2023.
  • Generative Poisoning Using Random Discriminators. Dirren van Vlijmen, A. Kolmus, Zhuoran Liu, Zhengyu Zhao, Martha Larson. 02 Nov 2022.
  • Transferable Graph Backdoor Attack. Shuiqiao Yang, Bao Gia Doan, Paul Montague, O. Vel, Tamas Abraham, S. Çamtepe, D. Ranasinghe, S. Kanhere. AAML. 21 Jun 2022.
  • Integrity Authentication in Tree Models. Weijie Zhao, Yingjie Lao, Ping Li. 30 May 2022.
  • Indiscriminate Data Poisoning Attacks on Neural Networks. Yiwei Lu, Gautam Kamath, Yaoliang Yu. AAML. 19 Apr 2022.
  • Poisoning Attacks and Defenses on Artificial Intelligence: A Survey. M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun. AAML. 21 Feb 2022.
  • FROB: Few-shot ROBust Model for Classification and Out-of-Distribution Detection. Nikolaos Dionelis, Mehrdad Yaghoobi, Sotirios A. Tsaftaris. OODD. 30 Nov 2021.
  • Disrupting Model Training with Adversarial Shortcuts. Ivan Evtimov, Ian Covert, Aditya Kusupati, Tadayoshi Kohno. AAML. 12 Jun 2021.
  • De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks. Jian Chen, Xuxin Zhang, Rui Zhang, Chen Wang, Ling Liu. AAML. 08 May 2021.
  • Quantitative robustness of instance ranking problems. Tino Werner. 12 Mar 2021.
  • Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching. Jonas Geiping, Liam H. Fowl, W. R. Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein. AAML. 04 Sep 2020.