ResearchTrend.AI
Provable defenses against adversarial examples via the convex outer adversarial polytope
Eric Wong, J. Zico Kolter
arXiv:1711.00851 (v3, latest) · 2 November 2017 · AAML
Links: arXiv abs · PDF · HTML · GitHub (387★)

Papers citing "Provable defenses against adversarial examples via the convex outer adversarial polytope" (showing 50 of 957)
- Wasserstein Adversarial Examples via Projected Sinkhorn Iterations · Eric Wong, Frank R. Schmidt, J. Zico Kolter · AAML · 21 Feb 2019
- advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch · G. Ding, Luyu Wang, Xiaomeng Jin · 20 Feb 2019
- Fast Neural Network Verification via Shadow Prices · Vicenç Rubies-Royo, Roberto Calandra, D. Stipanović, Claire Tomlin · AAML · 19 Feb 2019
- On Evaluating Adversarial Robustness · Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin · ELM, AAML · 18 Feb 2019
- VC Classes are Adversarially Robustly Learnable, but Only Improperly · Omar Montasser, Steve Hanneke, Nathan Srebro · Annual Conference Computational Learning Theory (COLT), 2019 · 12 Feb 2019
- Certified Adversarial Robustness via Randomized Smoothing · Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter · AAML · 08 Feb 2019
- Robustness Certificates Against Adversarial Examples for ReLU Networks · Sahil Singla, Soheil Feizi · AAML · 01 Feb 2019
- A New Family of Neural Networks Provably Resistant to Adversarial Attacks · Rakshit Agrawal, Luca de Alfaro, D. Helmbold · AAML, OOD · 01 Feb 2019
- Augmenting Model Robustness with Transformation-Invariant Attacks · Houpu Yao, Zhe Wang, Guangyu Nie, Yassine Mazboudi, Yezhou Yang, Yi Ren · AAML, OOD · 31 Jan 2019
- A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance · A. Shamir, Itay Safran, Eyal Ronen, O. Dunkelman · GAN, AAML · 30 Jan 2019
- Defense Methods Against Adversarial Examples for Recurrent Neural Networks · Ishai Rosenberg, A. Shabtai, Yuval Elovici, Lior Rokach · AAML, GAN · 28 Jan 2019
- Characterizing the Shape of Activation Space in Deep Neural Networks · Thomas Gebhart, Paul Schrater, Alan Hylton · AAML · 28 Jan 2019
- On the (In)fidelity and Sensitivity for Explanations · Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar · FAtt · 27 Jan 2019
- Theoretically Principled Trade-off between Robustness and Accuracy · Hongyang R. Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, L. Ghaoui, Sai Li · 24 Jan 2019
- The Limitations of Adversarial Training and the Blind-Spot Attack · Huan Zhang, Hongge Chen, Zhao Song, Duane S. Boning, Inderjit S. Dhillon, Cho-Jui Hsieh · AAML · 15 Jan 2019
- PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach · Tsui-Wei Weng, Pin-Yu Chen, Lam M. Nguyen, M. Squillante, Ivan Oseledets, Luca Daniel · AAML · 18 Dec 2018
- A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability · Xiaowei Huang, Daniel Kroening, Wenjie Ruan, Marta Kwiatkowska, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi · AAML · 18 Dec 2018
- Designing Adversarially Resilient Classifiers using Resilient Feature Engineering · Kevin Eykholt, A. Prakash · AAML · 17 Dec 2018
- Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem · Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf · OODD · 13 Dec 2018
- On the Security of Randomized Defenses Against Adversarial Samples · K. Sharad, G. Marson, H. Truong, Ghassan O. Karame · AAML · 11 Dec 2018
- Adversarial Attacks, Regression, and Numerical Stability Regularization · A. Nguyen, Edward Raff · AAML · 07 Dec 2018
- CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks · Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel · AAML · 29 Nov 2018
- A randomized gradient-free attack on ReLU networks · Francesco Croce, Matthias Hein · AAML · 28 Nov 2018
- Strong mixed-integer programming formulations for trained neural networks · Ross Anderson, Joey Huchette, Christian Tjandraatmadja, J. Vielma · Mathematical Programming (Math. Program.), 2018 · 20 Nov 2018
- Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples · Hajime Ono, Tsubasa Takahashi, Kazuya Kakizaki · AAML · 20 Nov 2018
- A Statistical Approach to Assessing Neural Network Robustness · Stefan Webb, Tom Rainforth, Yee Whye Teh, M. P. Kumar · AAML · International Conference on Learning Representations (ICLR), 2018 · 17 Nov 2018
- nn-dependability-kit: Engineering Neural Networks for Safety-Critical Autonomous Driving Systems · Chih-Hong Cheng, Chung-Hao Huang, Georg Nührenberg · 16 Nov 2018
- A Spectral View of Adversarially Robust Features · Shivam Garg, Willie Neiswanger, B. Zhang, Gregory Valiant · AAML · Neural Information Processing Systems (NeurIPS), 2018 · 15 Nov 2018
- Theoretical Analysis of Adversarial Learning: A Minimax Approach · Zhuozhuo Tu, Jingwei Zhang, Dacheng Tao · AAML · 13 Nov 2018
- AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning · K. Makarychev, Pascal Dupré, Yury Makarychev, Giancarlo Pellegrino, Dan Boneh · AAML · Conference on Computer and Communications Security (CCS), 2018 · 08 Nov 2018
- MixTrain: Scalable Training of Verifiably Robust Neural Networks · Yue Zhang, Yizheng Chen, Ahmed Abdou, Mohsen Guizani · AAML · 06 Nov 2018
- Semidefinite relaxations for certifying robustness to adversarial examples · Aditi Raghunathan, Jacob Steinhardt, Abigail Z. Jacobs · AAML · 02 Nov 2018
- Efficient Neural Network Robustness Certification with General Activation Functions · Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel · AAML · 02 Nov 2018
- Stronger Data Poisoning Attacks Break Data Sanitization Defenses · Pang Wei Koh, Jacob Steinhardt, Abigail Z. Jacobs · 02 Nov 2018
- On the Geometry of Adversarial Examples · Marc Khoury, Dylan Hadfield-Menell · AAML · 01 Nov 2018
- On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models · Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, J. Uesato, Relja Arandjelović, Timothy A. Mann, Pushmeet Kohli · AAML · 30 Oct 2018
- Logit Pairing Methods Can Fool Gradient-Based Attacks · Marius Mosbach, Maksym Andriushchenko, T. A. Trost, Matthias Hein, Dietrich Klakow · AAML · 29 Oct 2018
- Rademacher Complexity for Adversarially Robust Generalization · Dong Yin, Kannan Ramchandran, Peter L. Bartlett · AAML · 29 Oct 2018
- RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications · Huan Zhang, Pengchuan Zhang, Cho-Jui Hsieh · AAML · 28 Oct 2018
- Towards Robust Deep Neural Networks · Timothy E. Wang, Jack Gu, D. Mehta, Xiaojun Zhao, Edgar A. Bernal · OOD · 27 Oct 2018
- Robust Adversarial Learning via Sparsifying Front Ends · S. Gopalakrishnan, Zhinus Marzi, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani · AAML · 24 Oct 2018
- Adversarial Risk Bounds via Function Transformation · Justin Khim, Po-Ling Loh · AAML · 22 Oct 2018
- Cost-Sensitive Robustness against Adversarial Examples · Xiao Zhang, David Evans · AAML · 22 Oct 2018
- Provable Robustness of ReLU networks via Maximization of Linear Regions · Francesco Croce, Maksym Andriushchenko, Matthias Hein · 17 Oct 2018
- Combinatorial Attacks on Binarized Neural Networks · Elias Boutros Khalil, Amrita Gupta, B. Dilkina · AAML · 08 Oct 2018
- Empirical Bounds on Linear Regions of Deep Rectifier Networks · Thiago Serra, Srikumar Ramalingam · 08 Oct 2018
- Feature Prioritization and Regularization Improve Standard Accuracy and Adversarial Robustness · Chihuang Liu, Joseph Jaja · AAML · 04 Oct 2018
- Verification for Machine Learning, Autonomy, and Neural Networks Survey · Weiming Xiang, Patrick Musau, A. Wild, Diego Manzanas Lopez, Nathaniel P. Hamilton, Xiaodong Yang, Joel A. Rosenfeld, Taylor T. Johnson · 03 Oct 2018
- Adversarial Examples - A Complete Characterisation of the Phenomenon · A. Serban, E. Poll, Joost Visser · SILM, AAML · 02 Oct 2018
- Improving the Generalization of Adversarial Training with Domain Adaptation · Chuanbiao Song, Kun He, Liwei Wang, John E. Hopcroft · AAML, OOD · 01 Oct 2018