
Certifiers Make Neural Networks Vulnerable to Availability Attacks

25 August 2021
Tobias Lorenz, Marta Kwiatkowska, Mario Fritz
AAML, SILM

Papers citing "Certifiers Make Neural Networks Vulnerable to Availability Attacks"

5 / 5 papers shown

Globally-Robust Neural Networks
Klas Leino, Zifan Wang, Matt Fredrikson
AAML, OOD
16 Feb 2021

Deep Continuous Fusion for Multi-Sensor 3D Object Detection
Ming Liang, Bin Yang, Shenlong Wang, R. Urtasun
3DPC
20 Dec 2020

CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
AAML
29 Nov 2018

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer
AAML
03 Feb 2017

Safety Verification of Deep Neural Networks
Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu
AAML
21 Oct 2016