Interpreting Adversarial Examples by Activation Promotion and Suppression

3 April 2019
Kaidi Xu, Sijia Liu, Gaoyuan Zhang, Mengshu Sun, Pu Zhao, Quanfu Fan, Chuang Gan, X. Lin
AAML, FAtt

Papers citing "Interpreting Adversarial Examples by Activation Promotion and Suppression"

11 papers

Improving Adversarial Robustness via Decoupled Visual Representation Masking
Decheng Liu, Tao Chen, Chunlei Peng, Nannan Wang, Ruimin Hu, Xinbo Gao
AAML
16 Jun 2024

An Adversarial Robustness Perspective on the Topology of Neural Networks
Morgane Goibert, Thomas Ricatte, Elvis Dohmatob
AAML
04 Nov 2022

A Unified Game-Theoretic Interpretation of Adversarial Robustness
Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, ..., Xu Cheng, Xin Eric Wang, Meng Zhou, Jie Shi, Quanshi Zhang
AAML
12 Mar 2021

Patch-wise Attack for Fooling Deep Neural Network
Lianli Gao, Qilong Zhang, Jingkuan Song, Xianglong Liu, Heng Tao Shen
AAML
14 Jul 2020

Proper Network Interpretability Helps Adversarial Robustness in Classification
Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel
AAML, FAtt
26 Jun 2020

Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder
Guanlin Li, Shuya Ding, Jun-Jie Luo, Chang-rui Liu
AAML
06 May 2020

Defending against Backdoor Attack on Deep Neural Networks
Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, X. Lin, Xue Lin
AAML
26 Feb 2020

On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
AAML, AI4CE
08 Jan 2020

On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method
Pu Zhao, Sijia Liu, Pin-Yu Chen, Nghia Hoang, Kaidi Xu, B. Kailkhura, Xue Lin
AAML
26 Jul 2019

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio
AAML
04 Nov 2016

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML
08 Jul 2016