ResearchTrend.AI

Evading Classifiers by Morphing in the Dark
arXiv:1705.07535 · 22 May 2017
Hung Dang, Yue Huang, E. Chang
AAML

Papers citing "Evading Classifiers by Morphing in the Dark" (20 of 20 papers shown)
Wild Networks: Exposure of 5G Network Infrastructures to Adversarial Examples
  Giovanni Apruzzese, Rodion Vladimirov, A.T. Tastemirova, Pavel Laskov · AAML · 04 Jul 2022

MaMaDroid2.0 -- The Holes of Control Flow Graphs
  Harel Berger, Chen Hajaj, Enrico Mariconti, A. Dvir · 28 Feb 2022

Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
  M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun · AAML · 21 Feb 2022

WebGraph: Capturing Advertising and Tracking Information Flows for Robust Blocking
  S. Siby, Umar Iqbal, Steven Englehardt, Zubair Shafiq, Carmela Troncoso · AAML · 23 Jul 2021

Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack
  Rui Shu, Tianpei Xia, Laurie A. Williams, Tim Menzies · AAML · 23 Nov 2020

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
  Giorgio Severi, J. Meyer, Scott E. Coull, Alina Oprea · AAML, SILM · 02 Mar 2020

Malware Makeover: Breaking ML-based Static Analysis by Modifying Executable Bytes
  Keane Lucas, Mahmood Sharif, Lujo Bauer, Michael K. Reiter, S. Shintre · AAML · 19 Dec 2019

Constrained Concealment Attacks against Reconstruction-based Anomaly Detectors in Industrial Control Systems
  Alessandro Erba, Riccardo Taormina, S. Galelli, Marcello Pogliani, Michele Carminati, S. Zanero, Nils Ole Tippenhauer · AAML · 17 Jul 2019

Effectiveness of Distillation Attack and Countermeasure on Neural Network Watermarking
  Ziqi Yang, Hung Dang, E. Chang · AAML · 14 Jun 2019

On Training Robust PDF Malware Classifiers
  Yizheng Chen, Shiqi Wang, Dongdong She, Suman Jana · AAML · 06 Apr 2019

Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment
  Ziqi Yang, E. Chang, Zhenkai Liang · MLAU · 22 Feb 2019

Easy to Fool? Testing the Anti-evasion Capabilities of PDF Malware Scanners
  Saeed Ehteshamifar, Antonio Barresi, T. Gross, Michael Pradel · 17 Jan 2019

Evading classifiers in discrete domains with provable optimality guarantees
  B. Kulynych, Jamie Hayes, N. Samarin, Carmela Troncoso · AAML · 25 Oct 2018

HashTran-DNN: A Framework for Enhancing Robustness of Deep Neural Networks against Adversarial Malware Samples
  Deqiang Li, Ramesh Baral, Tao Li, Han Wang, Qianmu Li, Shouhuai Xu · AAML · 18 Sep 2018

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
  Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli · SILM, AAML · 08 Sep 2018

Gradient-Leaks: Understanding and Controlling Deanonymization in Federated Learning
  Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele, Mario Fritz · PICV, FedML · 15 May 2018

Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning
  Hyrum S. Anderson, Anant Kharkar, Bobby Filar, David Evans, P. Roth · AAML · 26 Jan 2018

Adversarial Deep Learning for Robust Detection of Binary Encoded Malware
  Abdullah Al-Dujaili, Alex Huang, Erik Hemberg, Una-May O'Reilly · AAML · 09 Jan 2018

Differentially Private Federated Learning: A Client Level Perspective
  Robin C. Geyer, T. Klein, Moin Nabi · FedML · 20 Dec 2017

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
  Battista Biggio, Fabio Roli · AAML · 08 Dec 2017