Can You Trust This Prediction? Auditing Pointwise Reliability After Learning
Peter F. Schulam, S. Saria
2 January 2019 · OOD

Papers citing "Can You Trust This Prediction? Auditing Pointwise Reliability After Learning"

20 / 20 papers shown
1. Deeper Understanding of Black-box Predictions via Generalized Influence Functions
   Hyeonsu Lyu, Jonggyu Jang, Sehyun Ryu, H. Yang
   TDI, AI4CE · 18 / 5 / 0 · 09 Dec 2023

2. Metrics reloaded: Recommendations for image analysis validation
   Lena Maier-Hein, Annika Reinke, Patrick Godau, M. Tizabi, Florian Buettner, ..., Aleksei Tiulpin, Sotirios A. Tsaftaris, Ben Van Calster, Gaël Varoquaux, Paul F. Jäger
   22 / 214 / 0 · 03 Jun 2022

3. A Cheap Bootstrap Method for Fast Inference
   H. Lam
   14 / 11 / 0 · 31 Jan 2022

4. Algorithmic encoding of protected characteristics in image-based models for disease detection
   Ben Glocker, Charles Jones, Mélanie Bernhardt, S. Winzeck
   21 / 9 / 0 · 27 Oct 2021

5. Detecting and Mitigating Test-time Failure Risks via Model-agnostic Uncertainty Learning
   Preethi Lahoti, Krishna P. Gummadi, G. Weikum
   26 / 3 / 0 · 09 Sep 2021

6. On the Importance of Regularisation & Auxiliary Information in OOD Detection
   John Mitros, Brian Mac Namee
   13 / 2 / 0 · 15 Jul 2021

7. Test for non-negligible adverse shifts
   Vathy M. Kamulete
   15 / 3 / 0 · 07 Jul 2021

8. Quality Assurance Challenges for Machine Learning Software Applications During Software Development Life Cycle Phases
   Md. Abdullah Al Alamin, Gias Uddin
   24 / 11 / 0 · 03 May 2021

9. Influence Based Defense Against Data Poisoning Attacks in Online Learning
   Sanjay Seetharaman, Shubham Malaviya, KV Rosni, Manish Shukla, S. Lodha
   TDI, AAML · 28 / 9 / 0 · 24 Apr 2021

10. CheXbreak: Misclassification Identification for Deep Learning Models Interpreting Chest X-rays
    E. Chen, Andy Kim, R. Krishnan, J. Long, A. Ng, Pranav Rajpurkar
    21 / 2 / 0 · 18 Mar 2021

11. FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging
    Han Guo, Nazneen Rajani, Peter Hase, Mohit Bansal, Caiming Xiong
    TDI · 19 / 102 / 0 · 31 Dec 2020

12. Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings
    John Mitros, A. Pakrashi, Brian Mac Namee
    UQCV · 11 / 2 / 0 · 03 Sep 2020

13. Beyond Point Estimate: Inferring Ensemble Prediction Variation from Neuron Activation Strength in Recommender Systems
    Zhe Chen, Yuyan Wang, Dong Lin, D. Cheng, Lichan Hong, Ed H. Chi, Claire Cui
    28 / 16 / 0 · 17 Aug 2020

14. Influence Functions in Deep Learning Are Fragile
    S. Basu, Phillip E. Pope, S. Feizi
    TDI · 6 / 219 / 0 · 25 Jun 2020

15. Calibrating Deep Neural Network Classifiers on Out-of-Distribution Datasets
    Zhihui Shao, Jianyi Yang, Shaolei Ren
    OODD · 27 / 11 / 0 · 16 Jun 2020

16. SafeML: Safety Monitoring of Machine Learning Classifiers through Statistical Difference Measure
    Koorosh Aslansefat, Ioannis Sorokos, D. Whiting, Ramin Tavakoli Kolagari, Y. Papadopoulos
    10 / 34 / 0 · 27 May 2020

17. Anomalous Example Detection in Deep Learning: A Survey
    Saikiran Bulusu, B. Kailkhura, Bo-wen Li, P. Varshney, D. Song
    AAML · 13 / 47 / 0 · 16 Mar 2020

18. Tutorial: Safe and Reliable Machine Learning
    S. Saria, Adarsh Subbaswamy
    FaML · 23 / 82 / 0 · 15 Apr 2019

19. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
    Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell
    UQCV, BDL · 270 / 5,660 / 0 · 05 Dec 2016

20. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
    Y. Gal, Zoubin Ghahramani
    UQCV, BDL · 261 / 9,134 / 0 · 06 Jun 2015