
Understanding Deep Neural Networks through Input Uncertainties

arXiv: 1810.13425 · 31 October 2018
Jayaraman J. Thiagarajan, Irene Kim, Rushil Anirudh, P. Bremer
UQCV, AAML

Papers citing "Understanding Deep Neural Networks through Input Uncertainties"

4 / 4 papers shown
Designing Accurate Emulators for Scientific Processes using Calibration-Driven Deep Models
Jayaraman J. Thiagarajan, Bindya Venkatesh, Rushil Anirudh, P. Bremer, J. Gaffney, G. Anderson, B. Spears
05 May 2020

Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation
Marco Ancona, Cengiz Öztireli, Markus Gross
FAtt, TDI
26 Mar 2019

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
FaML
24 Jun 2017

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
UQCV, BDL
06 Jun 2015