Hindering Adversarial Attacks with Implicit Neural Representations

22 October 2022
Andrei A. Rusu
D. A. Calian
Sven Gowal
R. Hadsell
Abstract

We introduce the Lossy Implicit Network Activation Coding (LINAC) defence, an input transformation which successfully hinders several common adversarial attacks on CIFAR-10 classifiers for perturbations up to ε = 8/255 in the L∞ norm and ε = 0.5 in the L2 norm. Implicit neural representations are used to approximately encode pixel colour intensities in 2D images, such that classifiers trained on the transformed data appear robust to small perturbations without adversarial training or large drops in performance. The seed of the random number generator used to initialise and train the implicit neural representation turns out to be necessary information for stronger generic attacks, suggesting its role as a private key. We devise a Parametric Bypass Approximation (PBA) attack strategy for key-based defences, which successfully invalidates an existing method in this category. Interestingly, our LINAC defence also hinders some transfer and adaptive attacks, including our novel PBA strategy. Our results emphasise the importance of evaluating against a broad range of customised attacks despite apparent robustness according to standard evaluations. LINAC source code and parameters of the defended classifiers evaluated throughout this submission are available at: https://github.com/deepmind/linac
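
To make the defence concrete, below is a minimal sketch of the LINAC idea as described in the abstract, assuming a coordinate-based MLP as the implicit representation: a small network f(x, y) → RGB is fit to a single image, with the private seed fixing the random initialisation, and the lossy reconstruction is handed to the classifier. The network size, optimiser, and step count are illustrative assumptions, not the paper's settings; consult the released code for the actual method.

```python
# Minimal LINAC-style transform sketch (illustrative, not the paper's code).
import jax
import jax.numpy as jnp

H = W = 32  # CIFAR-10 resolution

def init_params(key, sizes=(2, 64, 64, 3)):
    # He-initialised MLP weights; the PRNG key acts as the private seed.
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (d_in, d_out)) * jnp.sqrt(2.0 / d_in)
        params.append((w, jnp.zeros(d_out)))
    return params

def mlp(params, coords):
    # Coordinate network: maps (x, y) in [-1, 1]^2 to an RGB colour.
    h = coords
    for w, b in params[:-1]:
        h = jax.nn.relu(h @ w + b)
    w, b = params[-1]
    return jax.nn.sigmoid(h @ w + b)  # RGB in [0, 1]

def loss(params, coords, pixels):
    return jnp.mean((mlp(params, coords) - pixels) ** 2)

@jax.jit
def sgd_step(params, coords, pixels, lr=1e-2):
    grads = jax.grad(loss)(params, coords, pixels)
    return [(w - lr * gw, b - lr * gb)
            for (w, b), (gw, gb) in zip(params, grads)]

def linac_transform(image, seed, steps=500):
    """Fit an implicit representation to `image` (H, W, 3) and return its
    lossy reconstruction. `seed` plays the role of the private key: it fixes
    the initialisation (and, in the full method, the training randomness)."""
    ys, xs = jnp.meshgrid(jnp.linspace(-1, 1, H), jnp.linspace(-1, 1, W),
                          indexing="ij")
    coords = jnp.stack([xs, ys], axis=-1).reshape(-1, 2)
    pixels = image.reshape(-1, 3)
    params = init_params(jax.random.PRNGKey(seed))
    for _ in range(steps):
        params = sgd_step(params, coords, pixels)
    return mlp(params, coords).reshape(H, W, 3)

# Usage: transform an image before classification.
img = jax.random.uniform(jax.random.PRNGKey(0), (H, W, 3))
recon = linac_transform(img, seed=42)
```

Because the reconstruction depends on the seed, an attacker without the key cannot exactly differentiate through the transform, which is consistent with the abstract's observation that the seed is necessary information for stronger generic attacks.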

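The abstract does not spell out how the PBA attack works. One plausible reading of the name, offered here purely as an assumption, is that the attacker fits a key-free differentiable surrogate to the keyed transform and then runs a standard gradient attack (L∞ PGD below) through classifier ∘ surrogate, transferring the resulting perturbation to the real defence. Every name in this sketch (pgd_through_surrogate, surrogate, classifier) is hypothetical.

```python
# Hedged sketch of a bypass-approximation attack in the spirit of PBA.
import jax
import jax.numpy as jnp

def pgd_through_surrogate(classifier, surrogate, x, label,
                          eps=8/255, step=2/255, iters=10):
    """L-infinity PGD on classifier(surrogate(x)), maximising cross-entropy."""
    def loss(x_adv):
        logits = classifier(surrogate(x_adv))
        return -jnp.log(jax.nn.softmax(logits)[label])
    x_adv = x
    for _ in range(iters):
        g = jax.grad(loss)(x_adv)
        x_adv = x_adv + step * jnp.sign(g)        # ascend the loss
        x_adv = jnp.clip(x_adv, x - eps, x + eps)  # project to the eps-ball
        x_adv = jnp.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv

# Placeholder stand-ins so the sketch runs; replace with the real models.
def surrogate(x):   # differentiable approximation of the keyed transform
    return x

W_clf = jax.random.normal(jax.random.PRNGKey(1), (32 * 32 * 3, 10)) * 0.01
def classifier(x):  # toy linear classifier returning 10 logits
    return x.reshape(-1) @ W_clf

x = jax.random.uniform(jax.random.PRNGKey(2), (32, 32, 3))
x_adv = pgd_through_surrogate(classifier, surrogate, x, label=3)
```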
View on arXiv: https://arxiv.org/abs/2210.13982