
Diffence: Fencing Membership Privacy With Diffusion Models

Main: 12 pages, 14 figures, 10 tables; Bibliography: 3 pages; Appendix: 3 pages
Abstract

Deep learning models, while achieving remarkable performance, are vulnerable to membership inference attacks (MIAs). Although various defenses have been proposed, there is still substantial room for improvement in the privacy-utility trade-off. In this work, we introduce a novel defense framework against MIAs that leverages generative models. The key intuition behind our defense is to remove the differences between member and non-member inputs, which MIAs exploit, by re-generating input samples before feeding them to the target model. Our defense, called DIFFENCE, therefore operates pre-inference, unlike prior defenses that act either at training time or post-inference.
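The pre-inference pipeline the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the `regenerate` function stands in for a diffusion model's noise-then-denoise re-generation step (here a toy perturb-and-blend placeholder), and `defended_predict` shows where the defense sits relative to the target model.

```python
import numpy as np

def regenerate(x, noise_level=0.3, rng=None):
    """Hypothetical stand-in for diffusion-based re-generation:
    perturb the input with noise, then project it back toward the
    original (a placeholder for a real denoising/reverse process)."""
    rng = rng or np.random.default_rng(0)
    noisy = x + noise_level * rng.standard_normal(x.shape)
    # Placeholder "denoising": blend the noisy sample back toward the input.
    return 0.5 * noisy + 0.5 * x

def defended_predict(model, x):
    """Pre-inference defense: re-generate the input sample before
    it ever reaches the target model, so member and non-member
    inputs look alike to the attacker."""
    return model(regenerate(x))

# Toy target model: a fixed linear scorer standing in for a trained classifier.
w = np.array([0.2, -0.5, 0.3])
model = lambda x: float(x @ w)

x = np.array([1.0, 2.0, 3.0])
score = defended_predict(model, x)
```

Because the defense wraps only the input path, it composes with training-time or post-inference defenses rather than replacing them.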
