ResearchTrend.AI

arXiv: 1804.08757 (latest version: v3)
Siamese Generative Adversarial Privatizer for Biometric Data

Asian Conference on Computer Vision (ACCV), 2018
23 April 2018
Witold Oleszkiewicz
Peter Kairouz
Karol J. Piczak
Ram Rajagopal
Tomasz Trzciński
    AAML
Abstract

State-of-the-art machine learning algorithms can be fooled by carefully crafted adversarial examples. As such, adversarial examples present a concrete problem in AI safety. In this work, we turn the tables and ask the following question: can we harness the power of adversarial examples to prevent malicious adversaries from learning sensitive information, while allowing non-malicious entities to fully benefit from the utility of released datasets? To answer this question, we propose a novel Siamese Generative Adversarial Privatizer that exploits the properties of a Siamese neural network to find discriminative features that convey private information. When coupled with a generative adversarial network, our model is able to correctly locate and disguise sensitive information, while a minimal-distortion constraint prevents the network from degrading the utility of the resulting dataset. Our method shows promising results on a biometric dataset of fingerprints.
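The abstract describes an adversarial objective: a privatizer transforms the data to defeat a Siamese discriminator that matches identities, while a distortion penalty keeps the output close to the original. The sketch below illustrates that trade-off in NumPy with toy stand-ins (an additive-noise "privatizer", a single shared linear embedding, a standard contrastive loss, and a penalty weight `lam`); none of these specifics come from the paper, which uses learned GAN components.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(x, noise_scale=0.1):
    # Toy stand-in for the generator G: perturbs the input.
    # (The paper trains a GAN here; additive noise is only illustrative.)
    return x + noise_scale * rng.standard_normal(x.shape)

def embed(x, W):
    # Shared Siamese branch: one linear map followed by tanh.
    return np.tanh(x @ W)

def siamese_loss(x1, x2, same_identity, W, margin=1.0):
    # Contrastive loss of the adversarial Siamese discriminator:
    # pull same-identity pairs together, push others past a margin.
    d = np.linalg.norm(embed(x1, W) - embed(x2, W))
    return d**2 if same_identity else max(0.0, margin - d) ** 2

def privatizer_objective(x1, x2, same_identity, W, lam=10.0):
    # The privatizer wants the discriminator to FAIL (so it negates the
    # discriminator's loss) while a distortion penalty preserves utility,
    # mirroring the minimal-distortion constraint in the abstract.
    g1, g2 = privatize(x1), privatize(x2)
    privacy = -siamese_loss(g1, g2, same_identity, W)
    distortion = np.mean((g1 - x1) ** 2) + np.mean((g2 - x2) ** 2)
    return privacy + lam * distortion

# Example: evaluate the objective on a random same-identity pair.
x1, x2 = rng.standard_normal(16), rng.standard_normal(16)
W = rng.standard_normal((16, 8))
obj = privatizer_objective(x1, x2, same_identity=True, W=W)
```

In the actual method both `privatize` and the Siamese embedding are trained jointly, alternating discriminator and generator updates as in standard GAN training; this snippet only evaluates the combined objective once.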

View on arXiv