arXiv:2402.06674

Impact of Dataset Properties on Membership Inference Vulnerability of Deep Transfer Learning

7 February 2024
Marlon Tobaben
Hibiki Ito
Joonas Jälkö
Yuan He
Antti Honkela
Main: 10 pages · Bibliography: 3 pages · Appendix: 32 pages · 15 figures · 18 tables
Abstract

Membership inference attacks (MIAs) are used to test the practical privacy of machine learning models. MIAs complement the formal guarantees of differential privacy (DP) under a more realistic adversary model. We analyse the MIA vulnerability of fine-tuned neural networks both empirically and theoretically, the latter using a simplified model of fine-tuning. We show that the vulnerability of non-DP models, when measured as the attacker advantage at a fixed false positive rate, decreases according to a simple power law as the number of examples per class increases, even for the most vulnerable points; however, the dataset size needed to adequately protect the most vulnerable points is very large.
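To make the abstract's vulnerability measure concrete, here is a minimal synthetic sketch (not code or data from the paper) of the two ingredients it mentions: the attacker's true positive rate at a fixed false positive rate for a simple threshold MIA, and a power-law fit of attacker advantage against examples per class. All scores and advantage values below are made up for illustration.

```python
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, fpr=0.01):
    """TPR of a threshold attack whose threshold is set to achieve
    the given FPR on non-member scores (higher score = 'member')."""
    threshold = np.quantile(nonmember_scores, 1.0 - fpr)
    return float(np.mean(member_scores > threshold))

rng = np.random.default_rng(0)
# Synthetic attack scores: members score slightly higher on average.
members = rng.normal(1.0, 1.0, 10_000)
nonmembers = rng.normal(0.0, 1.0, 10_000)
tpr = tpr_at_fpr(members, nonmembers, fpr=0.01)

# Power-law fit: advantage ~ c * shots**(-exponent) is linear in
# log-log space, so an ordinary least-squares fit recovers the exponent.
shots = np.array([1, 2, 4, 8, 16, 32])          # examples per class
advantage = 0.5 * shots ** -0.7                 # synthetic decay curve
slope, log_c = np.polyfit(np.log(shots), np.log(advantage), 1)
print(f"TPR@1%FPR = {tpr:.3f}, fitted power-law exponent = {-slope:.2f}")
```

Extrapolating such a fit to the most vulnerable points is how one can estimate the (very large) dataset size the abstract says is needed for adequate protection.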
