A Human-Centric Take on Model Monitoring (arXiv:2206.02868)

6 June 2022
Murtuza N. Shergadwala, Himabindu Lakkaraju, K. Kenthapadi

Papers citing "A Human-Centric Take on Model Monitoring"

8 / 8 papers shown
Title
• Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance
  Thomas Decker, Alexander Koebler, Michael Lebacher, Ingo Thon, Volker Tresp, Florian Buettner. 24 Aug 2024.

• Visibility into AI Agents
  Alan Chan, Carson Ezell, Max Kaufmann, K. Wei, Lewis Hammond, ..., Nitarshan Rajkumar, David M. Krueger, Noam Kolt, Lennart Heim, Markus Anderljung. 23 Jan 2024.

• Measuring Distributional Shifts in Text: The Advantage of Language Model-Based Embeddings
  Gyandev Gupta, Bashir Rastegarpanah, Amalendu Iyer, Joshua Rubin, K. Kenthapadi. 04 Dec 2023.

• Can You Rely on Your Model Evaluation? Improving Model Evaluation with Synthetic Test Data
  B. V. Breugel, Nabeel Seedat, F. Imrie, M. Schaar. Tags: SyDa. 25 Oct 2023.

• Monitoring Machine Learning Models: Online Detection of Relevant Deviations
  Florian Heinrichs. 26 Sep 2023.

• Interpretability and Transparency-Driven Detection and Transformation of Textual Adversarial Examples (IT-DT)
  Bushra Sabir, Muhammad Ali Babar, Sharif Abuadbba. Tags: SILM. 03 Jul 2023.

• Towards A Rigorous Science of Interpretable Machine Learning
  Finale Doshi-Velez, Been Kim. Tags: XAI, FaML. 28 Feb 2017.

• Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
  Alexandra Chouldechova. Tags: FaML. 24 Oct 2016.