Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release

16 February 2021
Liam H. Fowl, Ping-yeh Chiang, Micah Goldblum, Jonas Geiping, Arpit Bansal, Wojtek Czaja, Tom Goldstein
arXiv: 2103.02683

Papers citing "Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release"

10 / 10 papers shown
Timber! Poisoning Decision Trees
Stefano Calzavara, Lorenzo Cazzaro, Massimo Vettori
AAML · 25 · 0 · 0 · 01 Oct 2024

Re-thinking Data Availablity Attacks Against Deep Neural Networks
Bin Fang, Bo-wen Li, Shuang Wu, Ran Yi, Shouhong Ding, Lizhuang Ma
AAML · 35 · 0 · 0 · 18 May 2023

Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation
Yixin Liu, Chenrui Fan, Pan Zhou, Lichao Sun
6 · 4 · 0 · 05 Mar 2023

Data Isotopes for Data Provenance in DNNs
Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov
18 · 12 · 0 · 29 Aug 2022

Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu, Gautam Kamath, Yaoliang Yu
AAML · 30 · 24 · 0 · 19 Apr 2022

Execute Order 66: Targeted Data Poisoning for Reinforcement Learning
Harrison Foley, Liam H. Fowl, Tom Goldstein, Gavin Taylor
AAML · 17 · 9 · 0 · 03 Jan 2022

Availability Attacks Create Shortcuts
Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu
AAML · 25 · 57 · 0 · 01 Nov 2021

Adversarial Examples Make Strong Poisons
Liam H. Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein
SILM · 23 · 131 · 0 · 21 Jun 2021

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri, Liam H. Fowl, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein
SILM · 22 · 123 · 0 · 16 Jun 2021

Disrupting Model Training with Adversarial Shortcuts
Ivan Evtimov, Ian Covert, Aditya Kusupati, Tadayoshi Kohno
AAML · 13 · 10 · 0 · 12 Jun 2021