Radioactive data: tracing through training

arXiv:2002.00937 · 3 February 2020
Alexandre Sablayrolles, Matthijs Douze, Cordelia Schmid, Hervé Jégou

Papers citing "Radioactive data: tracing through training"

47 papers shown
Towards Artificial General or Personalized Intelligence? A Survey on Foundation Models for Personalized Federated Intelligence
Yu Qiao, Huy Q. Le, Avi Deb Raha, Phuong-Nam Tran, Apurba Adhikary, Mengchun Zhang, Loc X. Nguyen, Eui-nam Huh, Dusit Niyato, C. Hong
AI4CE · 11 May 2025

MultiNeRF: Multiple Watermark Embedding for Neural Radiance Fields
Yash Kulthe, Andrew Gilbert, John Collomosse
03 Apr 2025

Instance-Level Data-Use Auditing of Visual ML Models
Zonghao Huang, Neil Zhenqiang Gong, Michael K. Reiter
MLAU · 28 Mar 2025

Obliviate: Efficient Unmemorization for Protecting Intellectual Property in Large Language Models
M. Russinovich, Ahmed Salem
MU, CLL · 20 Feb 2025

Towards Data Governance of Frontier AI Models
Jason Hausenloy, Duncan McClements, Madhavendra Thakur
05 Dec 2024

SoK: Dataset Copyright Auditing in Machine Learning Systems
L. Du, Xuanru Zhou, M. Chen, Chusong Zhang, Zhou Su, Peng Cheng, Jiming Chen, Zhikun Zhang
MLAU · 22 Oct 2024

Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models
Boheng Li, Yanhao Wei, Yankai Fu, Z. Wang, Yiming Li, Jie Zhang, Run Wang, Tianwei Zhang
DiffM, AAML · 14 Oct 2024

Data Taggants: Dataset Ownership Verification via Harmless Targeted Data Poisoning
Wassim Bouaziz, El-Mahdi El-Mhamdi, Nicolas Usunier
TDI, AAML · 09 Oct 2024

Ward: Provable RAG Dataset Inference via LLM Watermarks
Nikola Jovanović, Robin Staab, Maximilian Baader, Martin Vechev
04 Oct 2024

Proactive Schemes: A Survey of Adversarial Attacks for Social Good
Vishal Asnani, Xi Yin, Xiaoming Liu
AAML · 24 Sep 2024

MemControl: Mitigating Memorization in Diffusion Models via Automated Parameter Selection
Raman Dutt, Pedro Sanchez, Ondrej Bohdal, Sotirios A. Tsaftaris, Timothy M. Hospedales
MedIm · 29 May 2024

The Mosaic Memory of Large Language Models
Igor Shilov, Matthieu Meeus, Yves-Alexandre de Montjoye
24 May 2024

AI Competitions and Benchmarks: Dataset Development
Romain Egele, Julio C. S. Jacques Junior, Jan N. van Rijn, Isabelle M Guyon, Xavier Baró, Albert Clapés, Prasanna Balaprakash, Sergio Escalera, T. Moeslund, Jun Wan
15 Apr 2024

ProMark: Proactive Diffusion Watermarking for Causal Attribution
Vishal Asnani, John Collomosse, Tu Bui, Xiaoming Liu, S. Agarwal
WIGM, DiffM · 14 Mar 2024

AMUSE: Adaptive Multi-Segment Encoding for Dataset Watermarking
Saeed Ranjbar Alvar, Mohammad Akbari, David Yue, Yong Zhang
08 Mar 2024

DeepEclipse: How to Break White-Box DNN-Watermarking Schemes
Alessandro Pegoraro, Carlotta Segna, Kavita Kumari, Ahmad-Reza Sadeghi
AAML · 06 Mar 2024

Watermarking Makes Language Models Radioactive
Tom Sander, Pierre Fernandez, Alain Durmus, Matthijs Douze, Teddy Furon
WaLM · 22 Feb 2024

Proving membership in LLM pretraining data via data watermarks
Johnny Tian-Zheng Wei, Ryan Yixiang Wang, Robin Jia
WaLM · 16 Feb 2024

Is my Data in your AI Model? Membership Inference Test with Application to Face Images
Daniel DeAlcala, Aythami Morales, Gonzalo Mancera, Julian Fierrez, Ruben Tolosana, J. Ortega-Garcia
CVBM · 14 Feb 2024

GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks
Bang Wu, He Zhang, Xiangwen Yang, Shuo Wang, Minhui Xue, Shirui Pan, Xingliang Yuan
13 Dec 2023

SoK: Unintended Interactions among Machine Learning Defenses and Risks
Vasisht Duddu, S. Szyller, Nadarajah Asokan
AAML · 07 Dec 2023

Mendata: A Framework to Purify Manipulated Training Data
Zonghao Huang, Neil Zhenqiang Gong, Michael K. Reiter
03 Dec 2023

Adversarial Machine Learning for Social Good: Reframing the Adversary as an Ally
Shawqi Al-Maliki, Adnan Qayyum, Hassan Ali, M. Abdallah, Junaid Qadir, D. Hoang, Dusit Niyato, Ala I. Al-Fuqaha
AAML · 05 Oct 2023

Hey That's Mine Imperceptible Watermarks are Preserved in Diffusion Generated Outputs
Luke Ditria, Tom Drummond
WIGM · 22 Aug 2023

The "code'' of Ethics:A Holistic Audit of AI Code Generators
The "code'' of Ethics:A Holistic Audit of AI Code Generators
Wanlun Ma
Yiliao Song
Minhui Xue
Sheng Wen
Yang Xiang
22
4
0
22 May 2023
Identifying Appropriate Intellectual Property Protection Mechanisms for Machine Learning Models: A Systematization of Watermarking, Fingerprinting, Model Access, and Attacks
Isabell Lederer, Rudolf Mayer, Andreas Rauber
22 Apr 2023

GrOVe: Ownership Verification of Graph Neural Networks using Embeddings
Asim Waheed, Vasisht Duddu, Nadarajah Asokan
17 Apr 2023

Did You Train on My Dataset? Towards Public Dataset Protection with Clean-Label Backdoor Watermarking
Ruixiang Tang, Qizhang Feng, Ninghao Liu, Fan Yang, Xia Hu
20 Mar 2023

On the Robustness of Dataset Inference
S. Szyller, Rui Zhang, Jian Liu, Nadarajah Asokan
AAML · 24 Oct 2022

Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection
Yiming Li, Yang Bai, Yong Jiang, Yong-Liang Yang, Shutao Xia, Bo Li
AAML · 27 Sep 2022

Data Isotopes for Data Provenance in DNNs
Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov
29 Aug 2022

Conflicting Interactions Among Protection Mechanisms for Machine Learning Models
S. Szyller, Nadarajah Asokan
AAML · 05 Jul 2022

Membership Inference via Backdooring
Hongsheng Hu, Z. Salcic, Gillian Dobbie, Jinjun Chen, Lichao Sun, Xuyun Zhang
MIACV · 10 Jun 2022

The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis
L. Tidjon, Foutse Khomh
12 May 2022

Holistic Adversarial Robustness of Deep Learning Models
Pin-Yu Chen, Sijia Liu
AAML · 15 Feb 2022

Understanding Rare Spurious Correlations in Neural Networks
Yao-Yuan Yang, Chi-Ning Chou, Kamalika Chaudhuri
AAML · 10 Feb 2022

SoK: Anti-Facial Recognition Technology
Emily Wenger, Shawn Shan, Haitao Zheng, Ben Y. Zhao
PICV · 08 Dec 2021

Lightweight machine unlearning in neural network
Kongyang Chen, Yiwen Wang, Yao Huang
MU · 10 Nov 2021

10 Security and Privacy Problems in Large Foundation Models
Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
28 Oct 2021

CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning
Zhensu Sun, Xiaoning Du, Fu Song, Mingze Ni, Li Li
25 Oct 2021

A Framework for Deprecating Datasets: Standardizing Documentation, Identification, and Communication
A. Luccioni, Frances Corry, H. Sridharan, Mike Ananny, J. Schultz, Kate Crawford
18 Oct 2021

Federated Unlearning
Gaoyang Liu, Xiaoqiang Ma, Yang Yang, Chen Wang, Jiangchuan Liu
MU · 27 Dec 2020

Surgical Data Science -- from Concepts toward Clinical Translation
Lena Maier-Hein, Matthias Eisenmann, Duygu Sarikaya, Keno Marz, Toby Collins, ..., D. Teber, F. Uckert, Beat P. Müller-Stich, Pierre Jannin, Stefanie Speidel
AI4CE · 30 Oct 2020

Auditing Differentially Private Machine Learning: How Private is Private SGD?
Matthew Jagielski, Jonathan R. Ullman, Alina Oprea
FedML · 13 Jun 2020

MetaPoison: Practical General-purpose Clean-label Data Poisoning
W. R. Huang, Jonas Geiping, Liam H. Fowl, Gavin Taylor, Tom Goldstein
01 Apr 2020

Towards Probabilistic Verification of Machine Unlearning
David M. Sommer, Liwei Song, Sameer Wagh, Prateek Mittal
AAML · 09 Mar 2020

Do CNNs Encode Data Augmentations?
Eddie Q. Yan, Yanping Huang
OOD · 29 Feb 2020