Adversarial Examples from Dimensional Invariance

13 April 2023
Benjamin L. Badger
Abstract

Adversarial examples have been found for various deep as well as shallow learning models, and have at various times been suggested to be either fixable model-specific bugs, an inherent dataset feature, or both. We present theoretical and empirical results to show that adversarial examples are approximate discontinuities resulting from models that specify approximately bijective maps $f: \mathbb{R}^n \to \mathbb{R}^m$, $n \neq m$, over their inputs, and that these discontinuities follow from the topological invariance of dimension.
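
The chain of implications suggested by the abstract can be sketched as follows; the quantified form below, including the perturbation scale $\delta$, is an illustrative reading rather than the paper's own statement:

\[
n \neq m \;\Longrightarrow\; \mathbb{R}^n \not\cong \mathbb{R}^m
\qquad \text{(topological invariance of dimension: no homeomorphism exists)}
\]
\[
\Longrightarrow\; \text{no bijection } f: \mathbb{R}^n \to \mathbb{R}^m \text{ can be continuous with a continuous inverse}
\]
\[
\Longrightarrow\; \text{a model specifying an approximately bijective } f \text{ must contain approximate discontinuities, i.e. inputs } x
\]
\[
\text{for which some } x' \text{ with } \|x' - x\| < \delta \text{ satisfies } \|f(x') - f(x)\| \gg \delta .
\]

Such points are exactly the adversarial-example phenomenon: a small input perturbation that produces a disproportionately large change in the model's output.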
