Globally-Robust Neural Networks

16 February 2021
Klas Leino, Zifan Wang, Matt Fredrikson
AAML, OOD

Papers citing "Globally-Robust Neural Networks" (15 papers)
Graph of Attacks: Improved Black-Box and Interpretable Jailbreaks for LLMs
Mohammad Akbar-Tajari, Mohammad Taher Pilehvar, Mohammad Mahmoody
AAML
26 Apr 2025

On the uncertainty principle of neural networks
Jun-Jie Zhang, Dong-xiao Zhang, Jian-Nan Chen, L. Pang, Deyu Meng
17 Jan 2025

Adversarial Robustification via Text-to-Image Diffusion Models
Daewon Choi, Jongheon Jeong, Huiwon Jang, Jinwoo Shin
DiffM
26 Jul 2024

SPLITZ: Certifiable Robustness via Split Lipschitz Randomized Smoothing
Meiyu Zhong, Ravi Tandon
03 Jul 2024

Verifiable Boosted Tree Ensembles
Stefano Calzavara, Lorenzo Cazzaro, Claudio Lucchese, Giulio Ermanno Pibiri
AAML
22 Feb 2024

1-Lipschitz Neural Networks are more expressive with N-Activations
Bernd Prach, Christoph H. Lampert
AAML, FAtt
10 Nov 2023

Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization
Mahyar Fazlyab, Taha Entesari, Aniket Roy, Ramalingam Chellappa
AAML
29 Sep 2023

When to Trust AI: Advances and Challenges for Certification of Neural Networks
M. Kwiatkowska, Xiyue Zhang
AAML
20 Sep 2023

RS-Del: Edit Distance Robustness Certificates for Sequence Classifiers via Randomized Deletion
Zhuoqun Huang, Neil G. Marchant, Keane Lucas, Lujo Bauer, O. Ohrimenko, Benjamin I. P. Rubinstein
AAML
31 Jan 2023

Improved techniques for deterministic l2 robustness
Sahil Singla, S. Feizi
AAML
15 Nov 2022

Almost-Orthogonal Layers for Efficient General-Purpose Lipschitz Networks
Bernd Prach, Christoph H. Lampert
05 Aug 2022

Provably Adversarially Robust Nearest Prototype Classifiers
Václav Voráček, Matthias Hein
AAML
14 Jul 2022

Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds
Yujia Huang, Huan Zhang, Yuanyuan Shi, J Zico Kolter, Anima Anandkumar
02 Nov 2021

Trustworthy AI: From Principles to Practices
Bo-wen Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou
04 Oct 2021

Robust Models Are More Interpretable Because Attributions Look Normal
Zifan Wang, Matt Fredrikson, Anupam Datta
OOD, FAtt
20 Mar 2021