Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models
arXiv:2205.10686

21 May 2022
Shawn Shan
Wen-Luan Ding
Emily Wenger
Haitao Zheng
Ben Y. Zhao
    AAML

Papers citing "Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models"

4 / 4 papers shown
"Real Attackers Don't Compute Gradients": Bridging the Gap Between
  Adversarial ML Research and Practice
"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese
Hyrum S. Anderson
Savino Dambra
D. Freeman
Fabio Pierazzi
Kevin A. Roundy
AAML
27
75
0
29 Dec 2022
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
AAML
13 Oct 2021
Adversarial Attack across Datasets
Yunxiao Qin, Yuanhao Xiong, Jinfeng Yi, Lihong Cao, Cho-Jui Hsieh
AAML
13 Oct 2021
Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML
08 Jul 2016