PRADA: Protecting against DNN Model Stealing Attacks

7 May 2018
Mika Juuti, S. Szyller, Samuel Marchal, Nadarajah Asokan
Topics: SILM, AAML

Papers citing "PRADA: Protecting against DNN Model Stealing Attacks"

Showing 14 of 64 citing papers.
Model extraction from counterfactual explanations (03 Sep 2020)
Ulrich Aivodji, Alexandre Bolot, Sébastien Gambs
Topics: MIACV, MLAU

A Survey of Privacy Attacks in Machine Learning (15 Jul 2020)
M. Rigaki, Sebastian Garcia
Topics: PILM, AAML

Stealing Deep Reinforcement Learning Models for Fun and Profit (09 Jun 2020)
Kangjie Chen, Shangwei Guo, Tianwei Zhang, Xiaofei Xie, Yang Liu
Topics: MLAU, MIACV, OffRL

Perturbing Inputs to Prevent Model Stealing (12 May 2020)
J. Grana
Topics: AAML, SILM

MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation (06 May 2020)
Sanjay Kariyappa, A. Prakash, Moinuddin K. Qureshi
Topics: AAML

ENSEI: Efficient Secure Inference via Frequency-Domain Homomorphic Convolution for Privacy-Preserving Visual Recognition (11 Mar 2020)
S. Bian, Tianchen Wang, Masayuki Hiromoto, Yiyu Shi, Takashi Sato
Topics: FedML

NASS: Optimizing Secure Inference via Neural Architecture Search (30 Jan 2020)
S. Bian, Weiwen Jiang, Qing Lu, Yiyu Shi, Takashi Sato

Deep Neural Network Fingerprinting by Conferrable Adversarial Examples (02 Dec 2019)
Nils Lukas, Yuxuan Zhang, Florian Kerschbaum
Topics: MLAU, FedML, AAML

The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey (06 Nov 2019)
Olakunle Ibitoye, Rana Abou-Khamis, Mohamed el Shehaby, Ashraf Matrawy, M. O. Shafiq
Topics: AAML

IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary (28 Oct 2019)
Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

Extraction of Complex DNN Models: Real Threat or Boogeyman? (11 Oct 2019)
B. Atli, S. Szyller, Mika Juuti, Samuel Marchal, Nadarajah Asokan
Topics: MLAU, MIACV

A framework for the extraction of Deep Neural Networks by leveraging public data (22 May 2019)
Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, S. Shevade, V. Ganapathy
Topics: FedML, MLAU, MIACV

MLCapsule: Guarded Offline Deployment of Machine Learning as a Service (01 Aug 2018)
L. Hanzlik, Yang Zhang, Kathrin Grosse, A. Salem, Maximilian Augustin, Michael Backes, Mario Fritz
Topics: OffRL

Adversarial examples in the physical world (08 Jul 2016)
Alexey Kurakin, Ian Goodfellow, Samy Bengio
Topics: SILM, AAML