Parseval Networks: Improving Robustness to Adversarial Examples
arXiv:1704.08847 · 28 April 2017
Moustapha Cissé, Piotr Bojanowski, Edouard Grave, Yann N. Dauphin, Nicolas Usunier
AAML
Papers citing "Parseval Networks: Improving Robustness to Adversarial Examples" (50 / 487 papers shown)

Depthwise Separable Convolutions Allow for Fast and Memory-Efficient Spectral Normalization
Christina Runkel, Christian Etmann, Michael Möller, Carola-Bibiane Schönlieb · 12 Feb 2021

Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons
Bohang Zhang, Tianle Cai, Zhou Lu, Di He, Liwei Wang · OOD · 10 Feb 2021

CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection
Hanshu Yan, Jingfeng Zhang, Gang Niu, Jiashi Feng, Vincent Y. F. Tan, Masashi Sugiyama · AAML · 10 Feb 2021

SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation
Wuxinlin Cheng, Chenhui Deng, Zhiqiang Zhao, Yaohui Cai, Zhiru Zhang, Zhuo Feng · AAML · 07 Feb 2021

Noise Optimization for Artificial Neural Networks
Li Xiao, Zeliang Zhang, Yijie Peng · 06 Feb 2021

Regularization for convolutional kernel tensors to avoid unstable gradient problem in convolutional neural networks
Pei-Chang Guo · 05 Feb 2021

A Comprehensive Evaluation Framework for Deep Model Robustness
Jun Guo, Wei Bao, Jiakai Wang, Yuqing Ma, Xing Gao, Gang Xiao, Aishan Liu, Zehao Zhao, Xianglong Liu, Wenjun Wu · AAML, ELM · 24 Jan 2021

Fundamental Tradeoffs in Distributionally Adversarial Training
M. Mehrabi, Adel Javanmard, Ryan A. Rossi, Anup B. Rao, Tung Mai · AAML · 15 Jan 2021

Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection in Neural Networks
Rachel Sterneck, Abhishek Moitra, Priyadarshini Panda · AAML · 05 Jan 2021

Improving Adversarial Robustness in Weight-quantized Neural Networks
Chang Song, Elias Fallon, Hai Helen Li · AAML · 29 Dec 2020

Enhanced Regularizers for Attributional Robustness
A. Sarkar, Anirban Sarkar, V. Balasubramanian · 28 Dec 2020

Discovering Robust Convolutional Architecture at Targeted Capacity: A Multi-Shot Approach
Xuefei Ning, J. Zhao, Wenshuo Li, Tianchen Zhao, Yin Zheng, Huazhong Yang, Yu Wang · AAML · 22 Dec 2020

A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks
Qingsong Yao, Zecheng He, Yi Lin, Kai Ma, Yefeng Zheng, S. Kevin Zhou · AAML, MedIm · 17 Dec 2020

A case for new neural network smoothness constraints
Mihaela Rosca, T. Weber, A. Gretton, S. Mohamed · AAML · 14 Dec 2020

Regularizing Action Policies for Smooth Control with Reinforcement Learning
Siddharth Mysore, B. Mabsout, R. Mancuso, Kate Saenko · 11 Dec 2020

Data-Dependent Randomized Smoothing
Motasem Alfarra, Adel Bibi, Philip H. S. Torr, Bernard Ghanem · UQCV · 08 Dec 2020

Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection
Byunggill Joe, Jihun Hamm, Sung Ju Hwang, Sooel Son, I. Shin · AAML, OOD · 07 Dec 2020

Towards Natural Robustness Against Adversarial Examples
Haoyu Chu, Shikui Wei, Yao-Min Zhao · AAML · 04 Dec 2020

Towards Defending Multiple $\ell_p$-norm Bounded Adversarial Perturbations via Gated Batch Normalization
Aishan Liu, Shiyu Tang, Xinyun Chen, Lei Huang, Zhuozhuo Tu, Xianglong Liu, Dacheng Tao · AAML · 03 Dec 2020

From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation
Nikhil Kapoor, Andreas Bär, Serin Varghese, Jan David Schneider, Fabian Hüger, Peter Schlicht, Tim Fingscheidt · AAML · 02 Dec 2020

Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks
Abhishek Moitra, Priyadarshini Panda · AAML · 26 Nov 2020

Adversarially Robust Classification based on GLRT
Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani · VLM, AAML · 16 Nov 2020

Representing Deep Neural Networks Latent Space Geometries with Graphs
Carlos Lassance, Vincent Gripon, Antonio Ortega · AI4CE · 14 Nov 2020

Risk Assessment for Machine Learning Models
Paul Schwerdtner, Florens Greßner, Nikhil Kapoor, F. Assion, René Sass, W. Günther, Fabian Hüger, Peter Schlicht · 09 Nov 2020

Learning Efficient Task-Specific Meta-Embeddings with Word Prisms
Jingyi He, Kc Tsiolis, Kian Kenyon-Dean, Jackie C.K. Cheung · 05 Nov 2020

Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?
Anna-Kathrin Kopetzki, Bertrand Charpentier, Daniel Zügner, Sandhya Giri, Stephan Günnemann · 28 Oct 2020

A Dynamical View on Optimization Algorithms of Overparameterized Neural Networks
Zhiqi Bu, Shiyun Xu, Kan Chen · 25 Oct 2020

Adversarial Robustness of Supervised Sparse Coding
Jeremias Sulam, Ramchandran Muthumukar, R. Arora · AAML · 22 Oct 2020

Defense-guided Transferable Adversarial Attacks
Zifei Zhang, Kai Qiao, Jian Chen, Ningning Liang · AAML · 22 Oct 2020

Improving Transformation Invariance in Contrastive Representation Learning
Adam Foster, Rattana Pukdee, Tom Rainforth · 19 Oct 2020

Characterizing and Taming Model Instability Across Edge Devices
Eyal Cidon, Evgenya Pergament, Zain Asgar, Asaf Cidon, Sachin Katti · 18 Oct 2020

Layer-wise Characterization of Latent Information Leakage in Federated Learning
Fan Mo, Anastasia Borovykh, Mohammad Malekzadeh, Hamed Haddadi, Soteris Demetriou · FedML · 17 Oct 2020

Multi-Adversarial Learning for Cross-Lingual Word Embeddings
Haozhou Wang, James Henderson, Paola Merlo · GAN · 16 Oct 2020

Increasing the Robustness of Semantic Segmentation Models with Painting-by-Numbers
Christoph Kamann, Burkhard Güssefeld, Robin Hutmacher, J. H. Metzen, Carsten Rother · 12 Oct 2020

Constraining Logits by Bounded Function for Adversarial Robustness
Sekitoshi Kanai, Masanori Yamada, Shin'ya Yamaguchi, Hiroshi Takahashi, Yasutoshi Ida · AAML · 06 Oct 2020

Pruning Redundant Mappings in Transformer Models via Spectral-Normalized Identity Prior
Zi Lin, Jeremiah Zhe Liu, Ziao Yang, Nan Hua, Dan Roth · 05 Oct 2020

Do Wider Neural Networks Really Help Adversarial Robustness?
Boxi Wu, Jinghui Chen, Deng Cai, Xiaofei He, Quanquan Gu · AAML · 03 Oct 2020

Lipschitz neural networks are dense in the set of all Lipschitz functions
Stephan Eckstein · 29 Sep 2020

Adversarial Robustness of Stabilized NeuralODEs Might be from Obfuscated Gradients
Yifei Huang, Yaodong Yu, Hongyang R. Zhang, Yi-An Ma, Yuan Yao · AAML · 28 Sep 2020

Normalization Techniques in Training DNNs: Methodology, Analysis and Application
Lei Huang, Jie Qin, Yi Zhou, Fan Zhu, Li Liu, Ling Shao · AI4CE · 27 Sep 2020

Large Norms of CNN Layers Do Not Hurt Adversarial Robustness
Youwei Liang, Dong Huang · 17 Sep 2020

Input Hessian Regularization of Neural Networks
Waleed Mustafa, Robert A. Vandermeulen, Marius Kloft · AAML · 14 Sep 2020

Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective
G. R. Machado, Eugênio Silva, R. Goldschmidt · AAML · 08 Sep 2020

Adversarially Robust Neural Architectures
Minjing Dong, Yanxi Li, Yunhe Wang, Chang Xu · AAML, OOD · 02 Sep 2020

Privacy Preserving Recalibration under Domain Shift
Rachel Luo, Shengjia Zhao, Jiaming Song, Jonathan Kuck, Stefano Ermon, Silvio Savarese · 21 Aug 2020

On transversality of bent hyperplane arrangements and the topological expressiveness of ReLU neural networks
J. E. Grigsby, Kathryn A. Lindsey · 20 Aug 2020

Prevalence of Neural Collapse during the terminal phase of deep learning training
V. Papyan, Xuemei Han, D. Donoho · 18 Aug 2020

Adversarial Examples on Object Recognition: A Comprehensive Survey
A. Serban, E. Poll, Joost Visser · AAML · 07 Aug 2020

Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations
Ziquan Liu, Yufei Cui, Antoni B. Chan · 07 Aug 2020

Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks
Haoqiang Guo, Lu Peng, Jian Zhang, Fang Qi, Lide Duan · AAML · 03 Aug 2020