
Provable defenses against adversarial examples via the convex outer adversarial polytope

2 November 2017
Eric Wong
J. Zico Kolter
    AAML
ArXiv (abs) · PDF · HTML · GitHub (387★)

Papers citing "Provable defenses against adversarial examples via the convex outer adversarial polytope"

50 / 957 papers shown

Physically Realizable Adversarial Examples for LiDAR Object Detection (CVPR 2020)
  James Tu, Mengye Ren, S. Manivasagam, Ming Liang, Binh Yang, Richard Du, Frank Cheng, R. Urtasun
  3DPC · 01 Apr 2020
Safety-Aware Hardening of 3D Object Detection Neural Network Systems (SAFECOMP 2020)
  Chih-Hong Cheng
  3DPC · 25 Mar 2020

ARDA: Automatic Relational Data Augmentation for Machine Learning (PVLDB 2020)
  Nadiia Chepurko, Ryan Marcus, Emanuel Zgraggen, Raul Castro Fernandez, Tim Kraska, David R. Karger
  21 Mar 2020

Quantum noise protects quantum classifiers against adversaries (Phys. Rev. Research 2020)
  Yuxuan Du, Min-hsiu Hsieh, Tongliang Liu, Dacheng Tao, Nana Liu
  AAML · 20 Mar 2020

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations (NeurIPS 2020)
  Huan Zhang, Hongge Chen, Chaowei Xiao, Yue Liu, Mingyan D. Liu, Duane S. Boning, Cho-Jui Hsieh
  AAML · 19 Mar 2020
RAB: Provable Robustness Against Backdoor Attacks (IEEE S&P 2020)
  Maurice Weber, Xiaojun Xu, Bojan Karlas, Ce Zhang, Yue Liu
  AAML · 19 Mar 2020

Vulnerabilities of Connectionist AI Applications: Evaluation and Defence (Front. Big Data 2020)
  Christian Berghoff, Matthias Neu, Arndt von Twickel
  AAML · 18 Mar 2020

Diversity can be Transferred: Output Diversification for White- and Black-box Attacks
  Y. Tashiro, Yang Song, Stefano Ermon
  AAML · 15 Mar 2020

Certified Defenses for Adversarial Patches (ICLR 2020)
  Ping Yeh-Chiang, Renkun Ni, Ahmed Abdelkader, Chen Zhu, Christoph Studer, Tom Goldstein
  AAML · 14 Mar 2020

Topological Effects on Attacks Against Vertex Classification
  B. A. Miller, Mustafa Çamurcu, Alexander J. Gomez, Kevin S. Chan, Tina Eliassi-Rad
  AAML · 12 Mar 2020
Exploiting Verified Neural Networks via Floating Point Numerical Error (SA 2020)
  Kai Jia, Martin Rinard
  AAML · 06 Mar 2020

Denoised Smoothing: A Provable Defense for Pretrained Classifiers
  Hadi Salman, Mingjie Sun, Greg Yang, Ashish Kapoor, J. Zico Kolter
  04 Mar 2020

Hidden Cost of Randomized Smoothing
  Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel
  AAML · 02 Mar 2020

Exactly Computing the Local Lipschitz Constant of ReLU Networks (NeurIPS 2020)
  Matt Jordan, A. Dimakis
  02 Mar 2020

Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models (AISTATS 2020)
  Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans
  01 Mar 2020
Improving Certified Robustness via Statistical Learning with Logical Reasoning (NeurIPS 2020)
  Zhuolin Yang, Zhikuan Zhao, Wei Ping, Jiawei Zhang, Linyi Li, ..., Bojan Karlas, Ji Liu, Heng Guo, Ce Zhang, Yue Liu
  AAML · 28 Feb 2020

Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond
  Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Shiyu Huang, B. Kailkhura, Xinyu Lin, Cho-Jui Hsieh
  AAML · 28 Feb 2020

Certified Defense to Image Transformations via Randomized Smoothing (NeurIPS 2020)
  Marc Fischer, Maximilian Baader, Martin Vechev
  AAML · 27 Feb 2020

TSS: Transformation-Specific Smoothing for Robustness Certification (CCS 2020)
  Linyi Li, Maurice Weber, Xiaojun Xu, Luka Rimanic, B. Kailkhura, Tao Xie, Ce Zhang, Yue Liu
  AAML · 27 Feb 2020
Overfitting in adversarially robust deep learning (ICML 2020)
  Leslie Rice, Eric Wong, Zico Kolter
  26 Feb 2020

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger (ICML 2020)
  Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Li-zhen Cui, Masashi Sugiyama, Mohan S. Kankanhalli
  AAML · 26 Feb 2020

Towards Backdoor Attacks and Defense in Robust Machine Learning Models (Computers & Security 2020)
  E. Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay
  AAML · 25 Feb 2020

HYDRA: Pruning Adversarially Robust Neural Networks
  Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana
  AAML · 24 Feb 2020

Precise Tradeoffs in Adversarial Training for Linear Regression (COLT 2020)
  Adel Javanmard, Mahdi Soltanolkotabi, Hamed Hassani
  AAML · 24 Feb 2020

Lagrangian Decomposition for Neural Network Verification (UAI 2020)
  Rudy Bunel, Alessandro De Palma, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Juil Sock, M. P. Kumar
  24 Feb 2020
FR-Train: A Mutual Information-Based Approach to Fair and Robust Training (ICML 2020)
  Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh
  24 Feb 2020

Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers
  Chen Zhu, Renkun Ni, Ping Yeh-Chiang, Hengduo Li, Furong Huang, Tom Goldstein
  22 Feb 2020

Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework (NeurIPS 2020)
  Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu
  AAML · 21 Feb 2020

Randomized Smoothing of All Shapes and Sizes (ICML 2020)
  Greg Yang, Tony Duan, J. E. Hu, Hadi Salman, Ilya P. Razenshteyn, Jungshian Li
  AAML · 19 Feb 2020

Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks
  Tsubasa Takahashi
  GNN, AAML · 19 Feb 2020
Deflecting Adversarial Attacks
  Yao Qin, Nicholas Frosst, Colin Raffel, G. Cottrell, Geoffrey E. Hinton
  AAML · 18 Feb 2020

Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness (AAAI 2020)
  Huijie Feng, Chunpeng Wu, Guoyang Chen, Weifeng Zhang, Y. Ning
  AAML · 17 Feb 2020

Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality (NeurIPS 2020)
  Yi Zhang, Orestis Plevrakis, S. Du, Xingguo Li, Zhao Song, Sanjeev Arora
  16 Feb 2020

Robustness Verification for Transformers (ICLR 2020)
  Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Shiyu Huang, Cho-Jui Hsieh
  AAML · 16 Feb 2020

Adversarial Distributional Training for Robust Deep Learning (NeurIPS 2020)
  Yinpeng Dong, Zhijie Deng, Tianyu Pang, Hang Su, Jun Zhu
  OOD · 14 Feb 2020
More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models (ICML 2020)
  Lin Chen, Yifei Min, Mingrui Zhang, Amin Karbasi
  OOD · 11 Feb 2020

Adversarial Robustness for Code (ICML 2020)
  Pavol Bielik, Martin Vechev
  AAML · 11 Feb 2020

Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations (ICML 2020)
  Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, J. Jacobsen
  AAML, SILM · 11 Feb 2020

Semialgebraic Optimization for Lipschitz Constants of ReLU Networks
  Tong Chen, J. Lasserre, Victor Magron, Edouard Pauwels
  10 Feb 2020

Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness (ICML 2020)
  Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi
  08 Feb 2020
An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models (PerCom 2020)
  Yao Deng, Xi Zheng, Tianyi Zhang, Chen Chen, Guannan Lou, Yang Wang
  AAML · 06 Feb 2020

Regularizers for Single-step Adversarial Training
  B. S. Vivek, R. Venkatesh Babu
  AAML · 03 Feb 2020

Evaluating Robustness to Context-Sensitive Feature Perturbations of Different Granularities
  Isaac Dunn, Laura Hanu, Hadrien Pouget, Daniel Kroening, T. Melham
  AAML · 29 Jan 2020

Weighted Average Precision: Adversarial Example Detection in the Visual Perception of Autonomous Vehicles
  Yilan Li, Senem Velipasalar
  AAML · 25 Jan 2020

Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks
  Oliver Willers, Sebastian Sudholt, Shervin Raafatnia, Stephanie Abrecht
  22 Jan 2020
GhostImage: Remote Perception Attacks against Camera-based Image Classification Systems
  Yanmao Man, Ming Li, Ryan M. Gerdes
  AAML · 21 Jan 2020

Fast is better than free: Revisiting adversarial training (ICLR 2020)
  Eric Wong, Leslie Rice, J. Zico Kolter
  AAML, OOD · 12 Jan 2020

ReluDiff: Differential Verification of Deep Neural Networks (ICSE 2020)
  Brandon Paulsen, Jingbo Wang, Chao Wang
  10 Jan 2020

Guess First to Enable Better Compression and Adversarial Robustness
  Sicheng Zhu, Bang An, Shiyu Niu
  AAML · 10 Jan 2020

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius (ICLR 2020)
  Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Liwei Wang
  OOD, AAML · 08 Jan 2020