Training Set Debugging Using Trusted Items
arXiv:1801.08019 · 24 January 2018
Xuezhou Zhang, Xiaojin Zhu, Stephen J. Wright
Papers citing "Training Set Debugging Using Trusted Items" (16 of 16 shown)

Learning from Uncertain Data: From Possible Worlds to Possible Models
Jiongli Zhu, Su Feng, Boris Glavic, Babak Salimi · 28 May 2024

Manipulating Predictions over Discrete Inputs in Machine Teaching
Xiaodong Wu, Yufei Han, H. Dahrouj, Jianbing Ni, Zhenwen Liang, Xiangliang Zhang · 31 Jan 2024

Certifying Data-Bias Robustness in Linear Regression
Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni · 07 Jun 2022

Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun · AAML · 21 Feb 2022

Iterative Teaching by Label Synthesis
Weiyang Liu, Zhen Liu, Hanchen Wang, Liam Paull, Bernhard Schölkopf, Adrian Weller · 27 Oct 2021

CHEF: A Cheap and Fast Pipeline for Iteratively Cleaning Label Uncertainties (Technical Report)
Yinjun Wu, James Weimer, S. Davidson · 19 Jul 2021

De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks
Jian Chen, Xuxin Zhang, Rui Zhang, Chen Wang, Ling Liu · AAML · 08 May 2021

Defense Against Reward Poisoning Attacks in Reinforcement Learning
Kiarash Banihashem, Adish Singla, Goran Radanović · AAML · 10 Feb 2021

Efficient Estimation of Influence of a Training Instance
Sosuke Kobayashi, Sho Yokoi, Jun Suzuki, Kentaro Inui · TDI · 08 Dec 2020

Provable Training Set Debugging for Linear Regression
Xiaomin Zhang, Xiaojin Zhu, Po-Ling Loh · 16 Jun 2020

Complaint-driven Training Data Debugging for Query 2.0
Weiyuan Wu, Lampros Flokas, Eugene Wu, Jiannan Wang · 12 Apr 2020

Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation
Javier Carnerero-Cano, Luis Muñoz-González, P. Spencer, Emil C. Lupu · AAML · 28 Feb 2020

FR-Train: A Mutual Information-Based Approach to Fair and Robust Training
Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh · 24 Feb 2020

Less Is Better: Unweighted Data Subsampling via Influence Function
Zifeng Wang, Hong Zhu, Zhenhua Dong, Xiuqiang He, Shao-Lun Huang · TDI · 03 Dec 2019

Data Cleansing for Models Trained with SGD
Satoshi Hara, Atsushi Nitanda, Takanori Maehara · TDI · 20 Jun 2019

Learning Implicit Generative Models by Teaching Explicit Ones
Chao Du, Kun Xu, Chongxuan Li, Jun Zhu, Bo Zhang · DRL, GAN · 10 Jul 2018