ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples

27 August 2016
T. Tanay, Lewis D. Griffin
    AAML

Papers citing "A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples"

43 / 43 papers shown
  1. Exploiting the Layered Intrinsic Dimensionality of Deep Models for Practical Adversarial Training
     Enes Altinisik, Safa Messaoud, H. Sencar, Hassan Sajjad, Sanjay Chawla
     AAML · 27 May 2024
  2. Training Image Derivatives: Increased Accuracy and Universal Robustness
     V. Avrutskiy
     21 Oct 2023
  3. Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks
     Ruixiang Tang, Gord Lueck, Rodolfo Quispe, Huseyin A. Inan, Janardhan Kulkarni, Xia Hu
     20 Oct 2023
  4. Improve Video Representation with Temporal Adversarial Augmentation
     Jinhao Duan, Quanfu Fan, Hao-Ran Cheng, Xiaoshuang Shi, Kaidi Xu
     AAML, AI4TS, ViT · 28 Apr 2023
  5. Identifying Adversarially Attackable and Robust Samples
     Vyas Raina, Mark J. F. Gales
     AAML · 30 Jan 2023
  6. REAP: A Large-Scale Realistic Adversarial Patch Benchmark
     Nabeel Hingun, Chawin Sitawarin, Jerry Li, David A. Wagner
     AAML · 12 Dec 2022
  7. Textual Manifold-based Defense Against Natural Language Adversarial Examples
     D. M. Nguyen, Anh Tuan Luu
     AAML · 05 Nov 2022
  8. A Manifold View of Adversarial Risk
     Wen-jun Zhang, Yikai Zhang, Xiaoling Hu, Mayank Goswami, Chao Chen, Dimitris N. Metaxas
     AAML · 24 Mar 2022
  9. Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data
     Yaoqing Yang, Ryan Theisen, Liam Hodgkinson, Joseph E. Gonzalez, Kannan Ramchandran, Charles H. Martin, Michael W. Mahoney
     06 Feb 2022
  10. Image classifiers can not be made robust to small perturbations
     Zheng Dai, David K Gifford
     VLM, AAML · 07 Dec 2021
  11. Sparse Adversarial Video Attacks with Spatial Transformations
     Ronghui Mu, Wenjie Ruan, Leandro Soriano Marcolino, Q. Ni
     AAML · 10 Nov 2021
  12. A novel network training approach for open set image recognition
     Md Tahmid Hossain, S. Teng, Guojun Lu, Ferdous Sohel
     27 Sep 2021
  13. Advances in adversarial attacks and defenses in computer vision: A survey
     Naveed Akhtar, Ajmal Saeed Mian, Navid Kardan, M. Shah
     AAML · 01 Aug 2021
  14. Scale-invariant scale-channel networks: Deep networks that generalise to previously unseen scales
     Ylva Jansson, T. Lindeberg
     11 Jun 2021
  15. Do Input Gradients Highlight Discriminative Features?
     Harshay Shah, Prateek Jain, Praneeth Netrapalli
     AAML, FAtt · 25 Feb 2021
  16. Visually Imperceptible Adversarial Patch Attacks on Digital Images
     Yaguan Qian, Jiamin Wang, Bin Wang, Xiang Ling, Zhaoquan Gu, Chunming Wu, Wassim Swaileh
     AAML · 02 Dec 2020
  17. Characterizing and Taming Model Instability Across Edge Devices
     Eyal Cidon, Evgenya Pergament, Zain Asgar, Asaf Cidon, Sachin Katti
     18 Oct 2020
  18. A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning
     Hongjun Wang, Guanbin Li, Xiaobai Liu, Liang Lin
     GAN, AAML · 15 Oct 2020
  19. The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples
     Timo Freiesleben
     GAN · 11 Sep 2020
  20. Adversarial Examples on Object Recognition: A Comprehensive Survey
     A. Serban, E. Poll, Joost Visser
     AAML · 07 Aug 2020
  21. Boundary thickness and robustness in learning models
     Yaoqing Yang, Rekha Khanna, Yaodong Yu, A. Gholami, Kurt Keutzer, Joseph E. Gonzalez, K. Ramchandran, Michael W. Mahoney
     OOD · 09 Jul 2020
  22. Feature Purification: How Adversarial Training Performs Robust Deep Learning
     Zeyuan Allen-Zhu, Yuanzhi Li
     MLT, AAML · 20 May 2020
  23. Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
     Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong
     OOD, AAML · 02 Mar 2020
  24. Softmax-based Classification is k-means Clustering: Formal Proof, Consequences for Adversarial Attacks, and Improvement through Centroid Based Tailoring
     Sibylle Hess, W. Duivesteijn, D. Mocanu
     07 Jan 2020
  25. Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
     Rémi Bernhard, Pierre-Alain Moëllic, J. Dutertre
     AAML, MQ · 27 Sep 2019
  26. Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
     Haichao Zhang, Jianyu Wang
     AAML · 24 Jul 2019
  27. PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving
     Zelun Kong, Junfeng Guo, Ang Li, Cong Liu
     AAML · 09 Jul 2019
  28. ML-LOO: Detecting Adversarial Examples with Feature Attribution
     Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-ling Wang, Michael I. Jordan
     AAML · 08 Jun 2019
  29. Provably scale-covariant continuous hierarchical networks based on scale-normalized differential expressions coupled in cascade
     T. Lindeberg
     29 May 2019
  30. What Do Adversarially Robust Models Look At?
     Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima
     19 May 2019
  31. A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations
     Saeid Asgari Taghanaki, Kumar Abhishek, Shekoofeh Azizi, Ghassan Hamarneh
     AAML · 03 Mar 2019
  32. An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers
     Hui Xie, Jirong Yi, Weiyu Xu, R. Mudumbai
     AAML · 27 Jan 2019
  33. Defending Against Universal Perturbations With Shared Adversarial Training
     Chaithanya Kumar Mummadi, Thomas Brox, J. H. Metzen
     AAML · 10 Dec 2018
  34. Robustness via curvature regularization, and vice versa
     Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, J. Uesato, P. Frossard
     AAML · 23 Nov 2018
  35. With Friends Like These, Who Needs Adversaries?
     Saumya Jetley, Nicholas A. Lord, Philip H. S. Torr
     AAML · 11 Jul 2018
  36. Understanding Measures of Uncertainty for Adversarial Example Detection
     Lewis Smith, Y. Gal
     UQCV · 22 Mar 2018
  37. Adversarial Defense based on Structure-to-Signal Autoencoders
     Joachim Folz, Sebastián M. Palacio, Jörn Hees, Damian Borth, Andreas Dengel
     AAML · 21 Mar 2018
  38. Adversarial vulnerability for any classifier
     Alhussein Fawzi, Hamza Fawzi, Omar Fawzi
     AAML · 23 Feb 2018
  39. Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality
     Xingjun Ma, Bo-wen Li, Yisen Wang, S. Erfani, S. Wijewickrema, Grant Schoenebeck, D. Song, Michael E. Houle, James Bailey
     AAML · 08 Jan 2018
  40. High Dimensional Spaces, Deep Learning and Adversarial Examples
     S. Dube
     20 Jan 2018
  41. Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid
     Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli
     AAML · 23 Aug 2017
  42. Analyzing the Robustness of Nearest Neighbors to Adversarial Examples
     Yizhen Wang, S. Jha, Kamalika Chaudhuri
     AAML · 13 Jun 2017
  43. Robustness of classifiers to universal perturbations: a geometric perspective
     Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard, Stefano Soatto
     AAML · 26 May 2017