ResearchTrend.AI › Papers › 2202.03335 › Cited By
Membership Inference Attacks and Defenses in Neural Network Pruning
Xiaoyong Yuan, Lan Zhang
AAML
7 February 2022

Papers citing "Membership Inference Attacks and Defenses in Neural Network Pruning" (31 papers)

A Unified and Scalable Membership Inference Method for Visual Self-supervised Encoder via Part-aware Capability
Jie Zhu, Jirong Zha, Ding Li, Leye Wang
15 May 2025

Do Fairness Interventions Come at the Cost of Privacy: Evaluations for Binary Classifiers
Huan Tian, Guangsheng Zhang, Bo Liu, Tianqing Zhu, Ming Ding, Wanlei Zhou
08 Mar 2025

Trustworthy AI on Safety, Bias, and Privacy: A Survey
Xingli Fang, Jianwei Li, Varun Mulchandani, Jung-Eun Kim
11 Feb 2025

Membership Inference Attacks and Defenses in Federated Learning: A Survey
Li Bai, Haibo Hu, Qingqing Ye, Haoyang Li, Leixia Wang, Jianliang Xu
FedML
09 Dec 2024

TEESlice: Protecting Sensitive Neural Network Models in Trusted Execution Environments When Attackers have Pre-Trained Models
Ding Li, Ziqi Zhang, Mengyu Yao, Y. Cai, Yao Guo, Xiangqun Chen
FedML
15 Nov 2024

Edge Unlearning is Not "on Edge"! An Adaptive Exact Unlearning System on Resource-Constrained Devices
Xiaoyu Xia, Ziqi Wang, Ruoxi Sun, B. Liu, Ibrahim Khalil, Minhui Xue
MU
14 Oct 2024

Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks
Yu He, Boheng Li, Yao Wang, Mengda Yang, Juan Wang, Hongxin Hu, Xingyu Zhao
31 Aug 2024

Representation Magnitude has a Liability to Privacy Vulnerability
Xingli Fang, Jung-Eun Kim
23 Jul 2024

Do Parameters Reveal More than Loss for Membership Inference?
Anshuman Suri, Xiao Zhang, David E. Evans
MIACV · MIALM · AAML
17 Jun 2024

Inference Attacks: A Taxonomy, Survey, and Promising Directions
Feng Wu, Lei Cui, Shaowen Yao, Shui Yu
04 Jun 2024

Investigating Calibration and Corruption Robustness of Post-hoc Pruned Perception CNNs: An Image Classification Benchmark Study
Pallavi Mitra, Gesina Schwalbe, Nadja Klein
AAML
31 May 2024

Center-Based Relaxed Learning Against Membership Inference Attacks
Xingli Fang, Jung-Eun Kim
26 Apr 2024

Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks
Sayedeh Leila Noorbakhsh, Binghui Zhang, Yuan Hong, Binghui Wang
AAML
04 Mar 2024

Discriminative Adversarial Unlearning
Rohan Sharma, Shijie Zhou, Kaiyi Ji, Changyou Chen
MU
10 Feb 2024

Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression against Heterogeneous Attacks Toward AI Software Deployment
Jie Zhu, Leye Wang, Xiao Han, Anmin Liu, Tao Xie
AAML
02 Jan 2024

GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks
Bang Wu, He Zhang, Xiangwen Yang, Shuo Wang, Minhui Xue, Shirui Pan, Xingliang Yuan
13 Dec 2023

No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN Partition for On-Device ML
Ziqi Zhang, Chen Gong, Yifeng Cai, Yuanyuan Yuan, Bingyan Liu, Ding Li, Yao Guo, Xiangqun Chen
FedML
11 Oct 2023

Source Inference Attacks: Beyond Membership Inference Attacks in Federated Learning
Hongsheng Hu, Xuyun Zhang, Z. Salcic, Lichao Sun, K. Choo, Gillian Dobbie
30 Sep 2023

Artificial Intelligence for Web 3.0: A Comprehensive Survey
Meng Shen, Zhehui Tan, Dusit Niyato, Yuzhi Liu, Jiawen Kang, Zehui Xiong, Liehuang Zhu, Wei Wang, Xuemin Shen
17 Aug 2023

PATROL: Privacy-Oriented Pruning for Collaborative Inference Against Model Inversion Attacks
Shiwei Ding, Lan Zhang, Miao Pan, Xiaoyong Yuan
AAML
20 Jul 2023

Membership Inference Attacks on DNNs using Adversarial Perturbations
Hassan Ali, Adnan Qayyum, Ala I. Al-Fuqaha, Junaid Qadir
AAML
11 Jul 2023

Sparsity in neural networks can improve their privacy
Antoine Gonon, Léon Zheng, Clément Lalanne, Quoc-Tung Le, Guillaume Lauga, Can Pouliquen
20 Apr 2023

Can sparsity improve the privacy of neural networks?
Antoine Gonon, Léon Zheng, Clément Lalanne, Quoc-Tung Le, Guillaume Lauga, Can Pouliquen
11 Apr 2023

Safety and Performance, Why not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment
Jie Zhu, Leye Wang, Xiao Han
11 Aug 2022

Fault Detection and Classification of Aerospace Sensors using a VGG16-based Deep Neural Network
Zhongzhi Li, Yunmei Zhao, Jinyi Ma, J. Ai, Yiqun Dong
27 Jul 2022

NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks
Nuo Xu, Binghui Wang, Ran Ran, Wujie Wen, Parv Venkitasubramaniam
AAML
11 Jun 2022

A Blessing of Dimensionality in Membership Inference through Regularization
Jasper Tan, Daniel LeJeune, Blake Mason, Hamid Javadi, Richard G. Baraniuk
27 May 2022

Membership Inference Attack on Graph Neural Networks
Iyiola E. Olatunji, Wolfgang Nejdl, Megha Khosla
AAML
17 Jan 2021

Systematic Evaluation of Privacy Risks of Machine Learning Models
Liwei Song, Prateek Mittal
MIACV
24 Mar 2020

What is the State of Neural Network Pruning?
Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
06 Mar 2020

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin
05 Mar 2020