ResearchTrend.AI
Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient

1 January 2020
Ling Liang
Xing Hu
Lei Deng
Yujie Wu
Guoqi Li
Yufei Ding
Peng Li
Yuan Xie
    AAML

Papers citing "Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient"

12 papers shown
Input-Specific and Universal Adversarial Attack Generation for Spiking Neural Networks in the Spiking Domain
Spyridon Raptis
Haralampos-G. Stratigopoulos
AAML
07 May 2025
Understanding the Functional Roles of Modelling Components in Spiking Neural Networks
Huifeng Yin
Hanle Zheng
Jiayi Mao
Siyuan Ding
Xing Liu
M. Xu
Yifan Hu
Jing Pei
Lei Deng
28 Jan 2025
Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient
Yuhang Li
Youngeun Kim
Hyoungseob Park
Priyadarshini Panda
25 Apr 2023
Exploring Temporal Information Dynamics in Spiking Neural Networks
Youngeun Kim
Yuhang Li
Hyoungseob Park
Yeshwanth Venkatesha
Anna Hambitzer
Priyadarshini Panda
26 Nov 2022
Adversarial Defense via Neural Oscillation inspired Gradient Masking
Chunming Jiang
Yilei Zhang
AAML
04 Nov 2022
Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples
Nuo Xu
Kaleel Mahmood
Haowen Fang
Ethan Rathbun
Caiwen Ding
Wujie Wen
AAML
07 Sep 2022
Special Session: Towards an Agile Design Methodology for Efficient, Reliable, and Secure ML Systems
Shail Dave
Alberto Marchisio
Muhammad Abdullah Hanif
Amira Guesmi
Aviral Shrivastava
Ihsen Alouani
Muhammad Shafique
18 Apr 2022
Toward Robust Spiking Neural Network Against Adversarial Perturbation
Ling Liang
Kaidi Xu
Xing Hu
Lei Deng
Yuan Xie
AAML
12 Apr 2022
Adversarial Attacks on Spiking Convolutional Neural Networks for Event-based Vision
Julian Buchel
Gregor Lenz
Yalun Hu
Sadique Sheik
M. Sorbaro
AAML
06 Oct 2021
DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks
Alberto Marchisio
Giacomo Pira
Maurizio Martina
Guido Masera
Muhammad Shafique
AAML
01 Jul 2021
Long short-term memory and learning-to-learn in networks of spiking neurons
G. Bellec
Darjan Salaj
Anand Subramoney
Robert Legenstein
Wolfgang Maass
26 Mar 2018
Adversarial examples in the physical world
Alexey Kurakin
Ian Goodfellow
Samy Bengio
SILM
AAML
08 Jul 2016