ResearchTrend.AI
Outside the Box: Abstraction-Based Monitoring of Neural Networks
arXiv:1911.09032
20 November 2019
T. Henzinger, Anna Lukina, Christian Schilling
AAML

Papers citing "Outside the Box: Abstraction-Based Monitoring of Neural Networks"

13 papers shown
Enhancing System Self-Awareness and Trust of AI: A Case Study in Trajectory Prediction and Planning
Lars Ullrich, Zurab Mujirishvili, Knut Graichen
25 Apr 2025
Attention Masks Help Adversarial Attacks to Bypass Safety Detectors
Yunfan Shi
AAML
07 Nov 2024
Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward
Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma
OffRL
12 Apr 2024
SMARLA: A Safety Monitoring Approach for Deep Reinforcement Learning Agents
Amirhossein Zolfagharian, Manel Abdellatif, Lionel C. Briand, S. Ramesh
03 Aug 2023
What, Indeed, is an Achievable Provable Guarantee for Learning-Enabled Safety Critical Systems
Saddek Bensalem, Chih-Hong Cheng, Wei Huang, Xiaowei Huang, Changshun Wu, Xingyu Zhao
AAML
20 Jul 2023
A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation
Xiaowei Huang, Wenjie Ruan, Wei Huang, Gao Jin, Yizhen Dong, ..., Sihao Wu, Peipei Xu, Dengyu Wu, André Freitas, Mustafa A. Mustafa
ALM
19 May 2023
Out-Of-Distribution Detection Is Not All You Need
Joris Guérin, Kevin Delmas, Raul Sena Ferreira, Jérémie Guiochet
OODD
29 Nov 2022
Prioritizing Corners in OoD Detectors via Symbolic String Manipulation
Chih-Hong Cheng, Changshun Wu, Emmanouil Seferis, Saddek Bensalem
16 May 2022
Beyond Robustness: A Taxonomy of Approaches towards Resilient Multi-Robot Systems
Amanda Prorok, Matthew Malencia, Luca Carlone, Gaurav Sukhatme, Brian M. Sadler, Vijay R. Kumar
25 Sep 2021
Run-Time Monitoring of Machine Learning for Robotic Perception: A Survey of Emerging Trends
Q. Rahman, Peter Corke, Feras Dayoub
OOD
05 Jan 2021
Provably-Robust Runtime Monitoring of Neuron Activation Patterns
Chih-Hong Cheng
AAML
24 Nov 2020
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell
UQCV, BDL
05 Dec 2016
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
UQCV, BDL
06 Jun 2015