ResearchTrend.AI

Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning
arXiv:1904.01067 · 1 April 2019
A. Salem, Apratim Bhattacharyya, Michael Backes, Mario Fritz, Yang Zhang
Tags: FedML, AAML, MIACV

Papers citing "Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning" (40 papers shown)

  • PatientDx: Merging Large Language Models for Protecting Data-Privacy in Healthcare — José G. Moreno, Jesus Lovon, M'Rick Robin-Charlet, Christine Damase-Michel, L. Tamine (24 Apr 2025) [MoMe, LM&MA]
  • Theoretical Insights in Model Inversion Robustness and Conditional Entropy Maximization for Collaborative Inference Systems — Song Xia, Yi Yu, Wenhan Yang, Meiwen Ding, Zhuo Chen, Lingyu Duan, Alex C. Kot, Xudong Jiang (01 Mar 2025)
  • CRFU: Compressive Representation Forgetting Against Privacy Leakage on Machine Unlearning — Weiqi Wang, Chenhan Zhang, Zhiyi Tian, Shushu Liu, Shui Yu (27 Feb 2025) [MU]
  • GRID: Protecting Training Graph from Link Stealing Attacks on GNN Models — Jiadong Lou, Xu Yuan, Rui Zhang, Xingliang Yuan, Neil Gong, N. Tzeng (19 Jan 2025) [AAML]
  • Range Membership Inference Attacks — Jiashu Tao, Reza Shokri (09 Aug 2024)
  • Link Stealing Attacks Against Inductive Graph Neural Networks — Yixin Wu, Xinlei He, Pascal Berrang, Mathias Humbert, Michael Backes, Neil Zhenqiang Gong, Yang Zhang (09 May 2024)
  • State-of-the-Art Approaches to Enhancing Privacy Preservation of Machine Learning Datasets: A Survey — Chaoyu Zhang, Shaoyu Li (25 Feb 2024) [AILaw]
  • Learning Human Action Recognition Representations Without Real Humans — Howard Zhong, Samarth Mishra, Donghyun Kim, SouYoung Jin, Rameswar Panda, Hildegard Kuehne, Leonid Karlinsky, Venkatesh Saligrama, Aude Oliva, Rogerio Feris (10 Nov 2023)
  • Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations — T. Krauß, Alexandra Dmitrienko (06 Jun 2023) [AAML]
  • (Local) Differential Privacy has NO Disparate Impact on Fairness — Héber H. Arcolezi, K. Makhlouf, C. Palamidessi (25 Apr 2023)
  • Introducing Model Inversion Attacks on Automatic Speaker Recognition — Karla Pizzi, Franziska Boenisch, U. Sahin, Konstantin Böttinger (09 Jan 2023)
  • Backdoor Attacks Against Dataset Distillation — Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang (03 Jan 2023) [DD]
  • "Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice — Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy (29 Dec 2022) [AAML]
  • SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning — A. Salem, Giovanni Cherubin, David E. Evans, Boris Köpf, Andrew J. Paverd, Anshuman Suri, Shruti Tople, Santiago Zanella Béguelin (21 Dec 2022)
  • GRAIMATTER Green Paper: Recommendations for disclosure control of trained Machine Learning (ML) models from Trusted Research Environments (TREs) — E. Jefferson, J. Liley, Maeve Malone, S. Reel, Alba Crespi-Boixader, ..., Christian Cole, F. Ritchie, A. Daly, Simon Rogers, Jim Q. Smith (03 Nov 2022)
  • Exploiting Fairness to Enhance Sensitive Attributes Reconstruction — Julien Ferry, Ulrich Aivodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala (02 Sep 2022) [AAML]
  • Data Isotopes for Data Provenance in DNNs — Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov (29 Aug 2022)
  • Measuring Forgetting of Memorized Training Examples — Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, ..., Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang (30 Jun 2022) [TDI]
  • NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks — Nuo Xu, Binghui Wang, Ran Ran, Wujie Wen, Parv Venkitasubramaniam (11 Jun 2022) [AAML]
  • How to Combine Membership-Inference Attacks on Multiple Updated Models — Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan R. Ullman, Roxana Geambasu (12 May 2022)
  • Do Language Models Plagiarize? — Jooyoung Lee, Thai Le, Jinghui Chen, Dongwon Lee (15 Mar 2022)
  • Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks — Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri (08 Mar 2022) [MIALM]
  • Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning — Ji Gao, Sanjam Garg, Mohammad Mahmoody, Prashant Nalini Vasudevan (07 Feb 2022) [MIACV, AAML]
  • Reconstructing Training Data with Informed Adversaries — Borja Balle, Giovanni Cherubin, Jamie Hayes (13 Jan 2022) [MIACV, AAML]
  • DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection — Phillip Rieger, T. D. Nguyen, Markus Miettinen, A. Sadeghi (03 Jan 2022) [FedML, AAML]
  • Membership Inference Attacks From First Principles — Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr (07 Dec 2021) [MIACV, MIALM]
  • Property Inference Attacks Against GANs — Junhao Zhou, Yufei Chen, Chao Shen, Yang Zhang (15 Nov 2021) [AAML, MIACV]
  • Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective — Kha Dinh Duy, Taehyun Noh, Siwon Huh, Hojoon Lee (05 Nov 2021)
  • UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning — Ege Erdogan, Alptekin Kupcu, A. E. Cicek (20 Aug 2021) [FedML, MIACV]
  • Survey: Leakage and Privacy at Inference Time — Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris (04 Jul 2021) [PILM, MIACV]
  • Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization — Lixu Wang, Shichao Xu, Ruiqi Xu, Xiao Wang, Qi Zhu (13 Jun 2021) [AAML]
  • Exploiting Explanations for Model Inversion Attacks — Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim (26 Apr 2021) [MIACV]
  • SoK: Privacy-Preserving Collaborative Tree-based Model Learning — Sylvain Chatel, Apostolos Pyrgelis, J. Troncoso-Pastoriza, Jean-Pierre Hubaux (16 Mar 2021)
  • Quantifying and Mitigating Privacy Risks of Contrastive Learning — Xinlei He, Yang Zhang (08 Feb 2021)
  • Responsible Disclosure of Generative Models Using Scalable Fingerprinting — Ning Yu, Vladislav Skripniuk, Dingfan Chen, Larry S. Davis, Mario Fritz (16 Dec 2020) [WIGM]
  • Knowledge-Enriched Distributional Model Inversion Attacks — Si-An Chen, Mostafa Kahla, R. Jia, Guo-Jun Qi (08 Oct 2020)
  • Improving Robustness to Model Inversion Attacks via Mutual Information Regularization — Tianhao Wang, Yuheng Zhang, R. Jia (11 Sep 2020)
  • A Survey of Privacy Attacks in Machine Learning — M. Rigaki, Sebastian Garcia (15 Jul 2020) [PILM, AAML]
  • Systematic Evaluation of Privacy Risks of Machine Learning Models — Liwei Song, Prateek Mittal (24 Mar 2020) [MIACV]
  • Dynamic Backdoor Attacks Against Machine Learning Models — A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang (07 Mar 2020) [AAML]