Correctness Verification of Neural Networks

3 June 2019
Yichen Yang
Martin Rinard
Abstract

We present a novel framework for specifying and verifying correctness globally for neural networks on perception tasks. Most previous work on neural network verification for perception tasks focuses on robustness verification, which aims to verify that a network's prediction is stable in small local regions around labelled points. In contrast, our framework provides a way to specify correctness over the whole target input space and to verify that the network is correct for all target inputs (or to find the regions where it is not). We provide a specification through (1) a state space consisting of all relevant states of the world and (2) an observation process that produces neural network inputs from those states. Tiling the state and input spaces with a finite number of tiles, obtaining ground-truth bounds from the state tiles and network output bounds from the input tiles, and then comparing the two sets of bounds delivers an upper bound on the network output error for any input of interest. The framework also enables detecting illegal inputs -- inputs that are not contained in (or close to) the target input space as defined by the state space and observation process, and on which the network is not designed to work -- so that we can flag when no guarantees are available. Results from two case studies highlight the ability of our technique to verify error bounds over the whole target input space and show how the error bounds vary over the state and input spaces.
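To make the tile-and-compare construction concrete, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in chosen for illustration: the 1-D state space, the `observe`, `ground_truth`, and `network` functions, and the `network_output_bounds` helper, which here estimates output bounds by sampling rather than by the sound bound propagation a real verifier would use. It illustrates the per-tile bound comparison only, not the authors' implementation.

```python
# Sketch of tiling-based error verification: tile the state space, bound the
# ground truth on each state tile, bound the network output on the induced
# input tile, and take the worst-case gap between the two intervals.
# All functions below are hypothetical stand-ins for illustration.
import numpy as np

def observe(state):
    """Hypothetical observation process: maps a world state to a network input."""
    return np.array([np.sin(state), np.cos(state)])

def ground_truth(state):
    """Hypothetical ground-truth quantity the network should predict."""
    return state

def network(x):
    """Stand-in for a trained network; here a simple analytic function."""
    return np.arctan2(x[0], x[1])

def network_output_bounds(input_lo, input_hi, samples=32):
    """Crude output bounds over an input tile via dense sampling.
    A sound verifier would use interval or abstract-interpretation bounds."""
    grid = np.linspace(0.0, 1.0, samples)
    outs = [network(input_lo + t * (input_hi - input_lo)) for t in grid]
    return min(outs), max(outs)

def verify_error_bound(state_lo, state_hi, n_tiles=100):
    """Tile the state space, propagate each tile through the observation
    process, and bound |network output - ground truth| on every tile."""
    worst = 0.0
    edges = np.linspace(state_lo, state_hi, n_tiles + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Ground-truth bounds on this state tile (monotone in this toy case).
        gt_lo, gt_hi = ground_truth(lo), ground_truth(hi)
        # Input tile: per-dimension bounds on observations over the state tile.
        obs = np.stack([observe(s) for s in np.linspace(lo, hi, 8)])
        in_lo, in_hi = obs.min(axis=0), obs.max(axis=0)
        out_lo, out_hi = network_output_bounds(in_lo, in_hi)
        # Worst-case error: largest gap between the output and truth intervals.
        err = max(abs(out_hi - gt_lo), abs(gt_hi - out_lo))
        worst = max(worst, err)
    return worst

if __name__ == "__main__":
    print("error upper bound:", verify_error_bound(-1.0, 1.0))
```

The key design point the sketch mirrors is the asymmetry in the abstract: correctness (ground truth) is defined on the state space, while the network is bounded on the input space, and the observation process is what connects the two.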
