arXiv:2106.13870
Scene Uncertainty and the Wellington Posterior of Deterministic Image Classifiers

25 June 2021
Stephanie Tsuei
Aditya Golatkar
Stefano Soatto
Abstract

We propose a method to estimate the uncertainty of the outcome of an image classifier on a given input datum. Deep neural networks commonly used for image classification are deterministic maps from an input image to an output class. As such, their outcome on a given datum involves no uncertainty, so we must specify what variability we are referring to when defining, measuring and interpreting uncertainty, and attributing "confidence" to the outcome. To this end, we introduce the Wellington Posterior, which is the distribution of outcomes that would have been obtained in response to data that could have been generated by the same scene that produced the given image. Since there are infinitely many scenes that could have generated any given image, the Wellington Posterior involves inductive transfer from scenes other than the one portrayed. We explore the use of data augmentation, dropout, ensembling, single-view reconstruction, and model linearization to compute a Wellington Posterior. Additional methods include the use of conditional generative models such as generative adversarial networks, neural radiance fields, and conditional prior networks. We test these methods against the empirical posterior obtained by performing inference on multiple images of the same underlying scene. These developments are only a small step towards assessing the reliability of deep network classifiers in a manner that is compatible with safety-critical applications and human interpretation.
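Of the methods the abstract lists, data augmentation is the simplest to sketch: perturb the input image in ways that mimic nuisance variability of the underlying scene, run the deterministic classifier on each perturbed view, and take the empirical distribution of predicted classes as an estimate of the posterior over outcomes. The sketch below is a minimal illustration of that idea, not the paper's implementation; `augment` and `classify` are toy stand-ins (Gaussian noise as a crude proxy for scene variability, and a trivial intensity-bucketing "classifier"), and all function names are assumptions.

```python
import numpy as np

def wellington_posterior(image, classify, augment, n_samples=100,
                         n_classes=10, seed=None):
    """Monte-Carlo estimate of the distribution of classifier outcomes
    over augmented views of a single image (a data-augmentation proxy
    for sampling other images of the same scene)."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_classes)
    for _ in range(n_samples):
        view = augment(image, rng)          # perturbed view of the scene
        counts[classify(view)] += 1         # deterministic prediction
    return counts / n_samples               # empirical class distribution

# Toy stand-ins for illustration only (hypothetical):
def augment(image, rng):
    # additive Gaussian noise as a stand-in for scene/viewpoint variability
    return image + rng.normal(0.0, 0.1, size=image.shape)

def classify(image):
    # deterministic "classifier": bucket the mean intensity into 10 classes
    return int(np.clip(image.mean() * 10, 0, 9))

img = np.full((8, 8), 0.35)
posterior = wellington_posterior(img, classify, augment,
                                 n_samples=200, n_classes=10, seed=0)
```

A sharply peaked `posterior` indicates that the prediction is stable under the assumed nuisance variability; spread-out mass signals uncertainty. The paper's other estimators (dropout, ensembling, single-view reconstruction, generative models) differ only in how the alternative views or predictors are sampled.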