Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI

17 June 2024
Robert Hönig
Javier Rando
Nicholas Carlini
Florian Tramèr
    WIGM
    AAML
Abstract

Artists are increasingly concerned about advancements in image generation models that can closely replicate their unique artistic styles. In response, several protection tools against style mimicry have been developed that incorporate small adversarial perturbations into artworks published online. In this work, we evaluate the effectiveness of popular protections -- with millions of downloads -- and show they only provide a false sense of security. We find that low-effort and "off-the-shelf" techniques, such as image upscaling, are sufficient to create robust mimicry methods that significantly degrade existing protections. Through a user study, we demonstrate that all existing protections can be easily bypassed, leaving artists vulnerable to style mimicry. We caution that tools based on adversarial perturbations cannot reliably protect artists from the misuse of generative AI, and urge the development of alternative non-technological solutions.
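To make the "low-effort" bypass concrete, the following is a minimal, hypothetical sketch of the idea behind purification by resampling: small, high-frequency adversarial perturbations can be weakened by simply downscaling and re-upscaling a protected image before it is used for fine-tuning. This is an illustration only, not the authors' evaluation pipeline, which tests off-the-shelf upscalers and other robust mimicry methods; file names and the scale factor here are assumptions.

from PIL import Image

def naive_purify(path: str, scale: int = 2) -> Image.Image:
    # Illustrative purification: downscale then upscale with bicubic
    # resampling. This tends to wash out small adversarial perturbations;
    # it is NOT the authors' exact method, only a sketch of the principle.
    img = Image.open(path).convert("RGB")
    w, h = img.size
    small = img.resize((max(1, w // scale), max(1, h // scale)), Image.BICUBIC)
    return small.resize((w, h), Image.BICUBIC)

# Example usage (hypothetical file names):
# cleaned = naive_purify("protected_artwork.png")
# cleaned.save("purified_artwork.png")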

@article{hönig2025_2406.12027,
  title={Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI},
  author={Robert Hönig and Javier Rando and Nicholas Carlini and Florian Tramèr},
  journal={arXiv preprint arXiv:2406.12027},
  year={2025}
}