Model Inversion Attacks Meet Cryptographic Fuzzy Extractors

Main: 13 pages · Appendix: 6 pages · Bibliography: 2 pages · 5 figures · 13 tables
Abstract

Model inversion attacks pose an open challenge to privacy-sensitive applications that use machine learning (ML) models. For example, face authentication systems use modern ML models to compute embedding vectors from face images of enrolled users and store them. If these vectors are leaked, inversion attacks can accurately reconstruct user faces from them. Despite a decade of best-effort solutions, there is no systematic characterization of the properties needed in an ideal defense against model inversion, even for the canonical example of a face authentication system susceptible to data breaches.
