A Survey of Privacy Attacks in Machine Learning (arXiv:2007.07646)

15 July 2020
M. Rigaki
Sebastian Garcia
PILM
AAML

Papers citing "A Survey of Privacy Attacks in Machine Learning"

50 / 105 papers shown
Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition
Sander Schulhoff
Jeremy Pinto
Anaum Khan
Louis-François Bouchard
Chenglei Si
Svetlina Anati
Valen Tagliabue
Anson Liu Kost
Christopher Carnahan
Jordan L. Boyd-Graber
SILM
21
41
0
24 Oct 2023
Dynamically Weighted Federated k-Means
Patrick Holzer
Tania Jacob
Shubham Kavane
FedML
9
1
0
23 Oct 2023
When Machine Learning Models Leak: An Exploration of Synthetic Training Data
Manel Slokom
Peter-Paul de Wolf
Martha Larson
MIACV
22
1
0
12 Oct 2023
A Survey of Data Security: Practices from Cybersecurity and Challenges of Machine Learning
Padmaksha Roy
Jaganmohan Chandrasekaran
Erin Lanus
Laura J. Freeman
Jeremy Werner
10
3
0
06 Oct 2023
Recent Advances of Differential Privacy in Centralized Deep Learning: A Systematic Survey
Lea Demelius
Roman Kern
Andreas Trügler
SyDa
FedML
24
6
0
28 Sep 2023
Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey
Victoria Smith
Ali Shahin Shamsabadi
Carolyn Ashurst
Adrian Weller
PILM
27
24
0
27 Sep 2023
Large Language Model Alignment: A Survey
Tianhao Shen
Renren Jin
Yufei Huang
Chuang Liu
Weilong Dong
Zishan Guo
Xinwei Wu
Yan Liu
Deyi Xiong
LM&MA
14
169
0
26 Sep 2023
A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications
Yi Zhang
Yuying Zhao
Zhaoqing Li
Xueqi Cheng
Yu-Chiang Frank Wang
Olivera Kotevska
Philip S. Yu
Tyler Derr
18
9
0
31 Aug 2023
Probabilistic Dataset Reconstruction from Interpretable Models
Julien Ferry
Ulrich Aivodji
Sébastien Gambs
Marie-José Huguet
Mohamed Siala
13
5
0
29 Aug 2023
ProPILE: Probing Privacy Leakage in Large Language Models
Siwon Kim
Sangdoo Yun
Hwaran Lee
Martin Gubri
Sungroh Yoon
Seong Joon Oh
PILM
370
93
3
04 Jul 2023
Your Room is not Private: Gradient Inversion Attack on Reinforcement Learning
Miao Li
Wenhao Ding
Ding Zhao
AAML
18
0
0
15 Jun 2023
Gaussian Membership Inference Privacy
Tobias Leemann
Martin Pawelczyk
Gjergji Kasneci
13
14
0
12 Jun 2023
Tricking LLMs into Disobedience: Formalizing, Analyzing, and Detecting Jailbreaks
Abhinav Rao
S. Vashistha
Atharva Naik
Somak Aditya
Monojit Choudhury
17
17
0
24 May 2023
Sparsity in neural networks can improve their privacy
Antoine Gonon
Léon Zheng
Clément Lalanne
Quoc-Tung Le
Guillaume Lauga
Can Pouliquen
21
2
0
20 Apr 2023
Reinforcement Learning-Based Black-Box Model Inversion Attacks
Gyojin Han
Jaehyun Choi
Haeil Lee
Junmo Kim
MIACV
14
34
0
10 Apr 2023
Beyond Privacy: Navigating the Opportunities and Challenges of Synthetic Data
B. V. Breugel
M. Schaar
17
26
0
07 Apr 2023
Data Privacy Preservation on the Internet of Things
Jaydip Sen
S. Dasgupta
8
2
0
01 Apr 2023
Membership Inference Attacks against Synthetic Data through Overfitting Detection
B. V. Breugel
Hao Sun
Zhaozhi Qian
M. Schaar
8
44
0
24 Feb 2023
Digital Privacy Under Attack: Challenges and Enablers
Baobao Song
Mengyue Deng
Shiva Raj Pokhrel
Qiujun Lan
R. Doss
Gang Li
AAML
26
3
0
18 Feb 2023
Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation
Noel Loo
Ramin Hasani
Mathias Lechner
Alexander Amini
Daniela Rus
DD
22
5
0
02 Feb 2023
Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy
Yusuke Kawamoto
Kazumasa Miyake
K. Konishi
Y. Oiwa
11
2
0
18 Jan 2023
SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning
A. Salem
Giovanni Cherubin
David E. Evans
Boris Köpf
Andrew J. Paverd
Anshuman Suri
Shruti Tople
Santiago Zanella Béguelin
20
35
0
21 Dec 2022
Memorization of Named Entities in Fine-tuned BERT Models
Andor Diera
N. Lell
Aygul Garifullina
A. Scherp
10
0
0
07 Dec 2022
LDL: A Defense for Label-Based Membership Inference Attacks
Arezoo Rajabi
D. Sahabandu
Luyao Niu
Bhaskar Ramasubramanian
Radha Poovendran
AAML
17
3
0
03 Dec 2022
PriMask: Cascadable and Collusion-Resilient Data Masking for Mobile Cloud Inference
Linshan Jiang
Qun Song
Rui Tan
Mo Li
11
4
0
12 Nov 2022
On the Privacy Risks of Algorithmic Recourse
Martin Pawelczyk
Himabindu Lakkaraju
Seth Neel
11
29
0
10 Nov 2022
Inferring Class Label Distribution of Training Data from Classifiers: An Accuracy-Augmented Meta-Classifier Attack
Raksha Ramakrishna
György Dán
6
2
0
08 Nov 2022
The privacy issue of counterfactual explanations: explanation linkage attacks
S. Goethals
Kenneth Sörensen
David Martens
11
28
0
21 Oct 2022
Sketching for First Order Method: Efficient Algorithm for Low-Bandwidth Channel and Vulnerability
Zhao-quan Song
Yitan Wang
Zheng Yu
Licheng Zhang
FedML
18
28
0
15 Oct 2022
Towards Lightweight Black-Box Attacks against Deep Neural Networks
Chenghao Sun
Yonggang Zhang
Chaoqun Wan
Qizhou Wang
Ya Li
Tongliang Liu
Bo Han
Xinmei Tian
AAML
MLAU
6
5
0
29 Sep 2022
A Comprehensive Survey on Trustworthy Recommender Systems
Wenqi Fan
Xiangyu Zhao
Xiao Chen
Jingran Su
Jingtong Gao
...
Qidong Liu
Yiqi Wang
Hanfeng Xu
Lei Chen
Qing Li
FaML
17
46
0
21 Sep 2022
Model Inversion Attacks against Graph Neural Networks
Zaixin Zhang
Qi Liu
Zhenya Huang
Hao Wang
Cheekong Lee
Enhong Chen
AAML
13
35
0
16 Sep 2022
Exploiting Fairness to Enhance Sensitive Attributes Reconstruction
Julien Ferry
Ulrich Aivodji
Sébastien Gambs
Marie-José Huguet
Mohamed Siala
AAML
16
14
0
02 Sep 2022
Machine Learning with Confidential Computing: A Systematization of Knowledge
Fan Mo
Zahra Tarkhani
Hamed Haddadi
19
7
0
22 Aug 2022
Differentially Private Counterfactuals via Functional Mechanism
Fan Yang
Qizhang Feng
Kaixiong Zhou
Jiahao Chen
Xia Hu
9
7
0
04 Aug 2022
Bilateral Dependency Optimization: Defending Against Model-inversion Attacks
Xiong Peng
Feng Liu
Jingfeng Zhang
Long Lan
Junjie Ye
Tongliang Liu
Bo Han
6
33
0
11 Jun 2022
Gradient Obfuscation Gives a False Sense of Security in Federated Learning
Kai Yue
Richeng Jin
Chau-Wai Wong
D. Baron
H. Dai
FedML
11
44
0
08 Jun 2022
FaceMAE: Privacy-Preserving Face Recognition via Masked Autoencoders
K. Wang
Bo-Lu Zhao
Xiangyu Peng
Zheng Hua Zhu
Jiankang Deng
Xinchao Wang
Hakan Bilen
Yang You
PICV
38
11
0
23 May 2022
Lessons Learned: Defending Against Property Inference Attacks
Joshua Stock
Jens Wettlaufer
Daniel Demmler
Hannes Federrath
AAML
8
1
0
18 May 2022
Privacy Preserving Machine Learning for Electric Vehicles: A Survey
Abdul Rahman Sani
M. Hassan
Jinjun Chen
22
10
0
17 May 2022
Collaborative Drug Discovery: Inference-level Data Protection Perspective
Balázs Pejó
Mina Remeli
Adam Arany
M. Galtier
G. Ács
10
3
0
13 May 2022
Robustness Testing of Data and Knowledge Driven Anomaly Detection in Cyber-Physical Systems
Xugui Zhou
Maxfield Kouzel
H. Alemzadeh
OOD
AAML
8
12
0
20 Apr 2022
A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability
Enyan Dai
Tianxiang Zhao
Huaisheng Zhu
Jun Xu
Zhimeng Guo
Hui Liu
Jiliang Tang
Suhang Wang
14
123
0
18 Apr 2022
Label-Only Model Inversion Attacks via Boundary Repulsion
Mostafa Kahla
Si-An Chen
H. Just
R. Jia
16
74
0
03 Mar 2022
Privacy-aware Early Detection of COVID-19 through Adversarial Training
Omid Rohanian
Samaneh Kouchaki
A. Soltan
Jenny Yang
Morteza Rohanian
Yang Yang
David A. Clifton
AAML
OOD
9
6
0
09 Jan 2022
Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups
A. Wainakh
Ephraim Zimmer
Sandeep Subedi
Jens Keim
Tim Grube
Shankar Karuppayah
Alejandro Sánchez Guinea
Max Mühlhäuser
6
9
0
05 Nov 2021
On the Privacy Risks of Deploying Recurrent Neural Networks in Machine Learning Models
Yunhao Yang
Parham Gohari
Ufuk Topcu
AAML
17
3
0
06 Oct 2021
Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning
Maziar Gomrokchi
Susan Amin
Hossein Aboutalebi
Alexander Wong
Doina Precup
MIACV
AAML
16
3
0
08 Sep 2021
FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning
Nam Hyeon-Woo
Moon Ye-Bin
Tae-Hyun Oh
FedML
6
113
0
13 Aug 2021
Trustworthy AI: A Computational Perspective
Haochen Liu
Yiqi Wang
Wenqi Fan
Xiaorui Liu
Yaxin Li
Shaili Jain
Yunhao Liu
Anil K. Jain
Jiliang Tang
FaML
96
193
0
12 Jul 2021