To Trust Or Not To Trust A Classifier

30 May 2018
Heinrich Jiang, Been Kim, Melody Y. Guan, Maya R. Gupta
UQCV

Papers citing "To Trust Or Not To Trust A Classifier"

14 / 64 papers shown
The Right Tool for the Job: Matching Model and Instance Complexities
Roy Schwartz, Gabriel Stanovsky, Swabha Swayamdipta, Jesse Dodge, Noah A. Smith
16 Apr 2020

Mix-n-Match: Ensemble and Compositional Methods for Uncertainty Calibration in Deep Learning
Jize Zhang, B. Kailkhura, T. Y. Han
16 Mar 2020 · UQCV

Anomalous Example Detection in Deep Learning: A Survey
Saikiran Bulusu, B. Kailkhura, Bo-wen Li, P. Varshney, D. Song
16 Mar 2020 · AAML

Real-time Out-of-distribution Detection in Learning-Enabled Cyber-Physical Systems
Feiyang Cai, X. Koutsoukos
28 Jan 2020 · OODD

Distance-Based Learning from Errors for Confidence Calibration
Chen Xing, Sercan Ö. Arik, Zizhao Zhang, Tomas Pfister
03 Dec 2019 · FedML

Addressing Failure Prediction by Learning Model Confidence
Charles Corbière, Nicolas Thome, Avner Bar-Hen, Matthieu Cord, P. Pérez
01 Oct 2019

Density estimation in representation space to predict model uncertainty
Tiago Ramalho, M. Corbalan
20 Aug 2019 · UQCV, BDL

Interpretable Counterfactual Explanations Guided by Prototypes
A. V. Looveren, Janis Klaise
03 Jul 2019 · FAtt

Detecting Adversarial Examples and Other Misclassifications in Neural Networks by Introspection
Jonathan Aigrain, Marcin Detyniecki
22 May 2019 · AAML

Tutorial: Safe and Reliable Machine Learning
S. Saria, Adarsh Subbaswamy
15 Apr 2019 · FaML

Visual Entailment: A Novel Task for Fine-Grained Image Understanding
Ning Xie, Farley Lai, Derek Doran, Asim Kadav
20 Jan 2019 · CoGe

HashTran-DNN: A Framework for Enhancing Robustness of Deep Neural Networks against Adversarial Malware Samples
Deqiang Li, Ramesh Baral, Tao Li, Han Wang, Qianmu Li, Shouhuai Xu
18 Sep 2018 · AAML

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell
05 Dec 2016 · UQCV, BDL

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
06 Jun 2015 · UQCV, BDL