A principled approach for generating adversarial images under non-smooth dissimilarity metrics

5 August 2019
Aram-Alexandre Pooladian
Chris Finlay
Tim Hoheisel
Adam M. Oberman
Abstract

Deep neural networks perform well on real-world data but are prone to adversarial perturbations: small changes in the input easily lead to misclassification. In this work, we propose an attack methodology not only for cases where the perturbations are measured by ℓ_p norms, but in fact for any adversarial dissimilarity metric with a closed-form proximal operator. This includes, but is not limited to, ℓ_1, ℓ_2, and ℓ_∞ perturbations; the ℓ_0 counting "norm" (i.e., true sparseness); and the total variation seminorm, a (non-ℓ_p) convolutional dissimilarity measuring local pixel changes. Our approach is a natural extension of a recent adversarial attack method, and eliminates the requirement that the metric be differentiable. We demonstrate our algorithm, ProxLogBarrier, on the MNIST, CIFAR10, and ImageNet-1k datasets. We consider both undefended and defended models, and show that our algorithm transfers easily across datasets. We observe that ProxLogBarrier outperforms a host of modern adversarial attacks specialized for the ℓ_0 case. Moreover, by altering images in the total variation seminorm, we shed light on a new class of perturbations that exploit neighboring pixel information.
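
The abstract describes an attack built around the proximal operator of the perturbation metric, so the non-smooth part never needs a gradient. The sketch below illustrates that general proximal-gradient structure only; it is not the authors' ProxLogBarrier implementation. It substitutes a plain cross-entropy attack loss for the paper's log-barrier objective, uses the ℓ_1 metric (whose prox is soft-thresholding) as the example, and the function names, hyperparameters, and pixel-range handling are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def soft_threshold(z, t):
    # Proximal operator of t * ||z||_1: shrink every entry toward zero by t.
    return torch.sign(z) * torch.clamp(z.abs() - t, min=0.0)

def proximal_l1_attack(model, x, y, steps=300, step_size=1e-2, lam=1e-3):
    # Proximal-gradient sketch (assumed setup, not the paper's algorithm):
    # a gradient step on a smooth attack loss, followed by the prox of the
    # non-smooth l1 perturbation metric. Cross-entropy stands in for the
    # paper's log-barrier objective.
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta = delta.detach().requires_grad_(True)
        # Smooth part: push the classifier's prediction away from the label y.
        loss = -F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Gradient step on the smooth loss, then soft-thresholding
        # (the proximal operator of lam * ||delta||_1).
        delta = soft_threshold(delta - step_size * grad, step_size * lam)
        # Keep the perturbed image inside the valid pixel range [0, 1].
        delta = torch.clamp(x + delta, 0.0, 1.0) - x
    return (x + delta).detach()
```

Swapping the metric amounts to swapping the prox: for example, projecting onto an ℓ_∞ ball or hard-thresholding for the ℓ_0 "norm" in place of soft_threshold, with the rest of the loop unchanged.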
