Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples
24 April 2017
Haw-Shiuan Chang, Erik Learned-Miller, Andrew McCallum
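
The title names the paper's core idea: during training, emphasize examples whose predictions are still fluctuating rather than those the network already gets consistently right or consistently wrong. As a rough illustration only (not the paper's exact algorithm; the class name and the `history` and `smoothing` values below are arbitrary assumptions), the sketch tracks each example's predicted probability for its true label over recent epochs and turns the spread of that history into minibatch sampling weights.

import numpy as np

class HighVarianceSampler:
    """Track p(true label) per example; turn its spread into sampling weights.
    Illustrative sketch, not the method from the paper."""

    def __init__(self, num_examples, history=10, smoothing=0.05):
        self.history = history      # how many recent measurements to keep per example
        self.smoothing = smoothing  # floor so every example keeps a nonzero weight
        self.probs = [[] for _ in range(num_examples)]

    def record(self, indices, true_label_probs):
        # Call once per minibatch with the softmax probability of each example's true label.
        for i, p in zip(indices, true_label_probs):
            h = self.probs[i]
            h.append(float(p))
            if len(h) > self.history:
                h.pop(0)

    def sampling_weights(self):
        # Weight each example by the standard deviation of its recent predictions.
        stds = np.array([np.std(h) if len(h) > 1 else 0.0 for h in self.probs])
        w = stds + self.smoothing
        return w / w.sum()

A training loop would call record() while running each epoch and then draw the next epoch's examples with numpy.random.choice over these weights, so that low-variance examples (already learned, or consistently mispredicted) are sampled less often.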

Papers citing "Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples"

31 / 81 papers shown
Dataset Cartography: Mapping and Diagnosing Datasets with Training Dynamics
Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, Yejin Choi
22 Sep 2020

Simplify and Robustify Negative Sampling for Implicit Collaborative Filtering
Jingtao Ding, Yuhan Quan, Quanming Yao, Yong Li, Depeng Jin
07 Sep 2020

Salvage Reusable Samples from Noisy Data for Robust Learning [NoLa]
Zeren Sun, Xiansheng Hua, Yazhou Yao, Xiu-Shen Wei, Guosheng Hu, Jian Zhang
06 Aug 2020

Adaptive Task Sampling for Meta-Learning
Chenghao Liu, Zhihao Wang, Doyen Sahoo, Yuan Fang, Kun Zhang, Guosheng Lin
17 Jul 2020

Learning from Noisy Labels with Deep Neural Networks: A Survey [NoLa]
Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, Jae-Gil Lee
16 Jul 2020

Early-Learning Regularization Prevents Memorization of Noisy Labels [NoLa]
Sheng Liu, Jonathan Niles-Weed, N. Razavian, C. Fernandez-Granda
30 Jun 2020

Learning Bounds for Risk-sensitive Learning
Jaeho Lee, Sejun Park, Jinwoo Shin
15 Jun 2020

Meta Transition Adaptation for Robust Deep Learning with Noisy Labels [NoLa]
Jun Shu, Qian Zhao, Zengben Xu, Deyu Meng
10 Jun 2020

Transfer Learning via Contextual Invariants for One-to-Many Cross-Domain Recommendation
A. Krishnan, Mahashweta Das, M. Bendre, Hao Yang, Hari Sundaram
21 May 2020

Distributionally Robust Deep Learning using Hardness Weighted Sampling [OOD]
Lucas Fidon, Michael Aertsen, Thomas Deprest, Doaa Emam, Frédéric Guffens, ..., Andrew Melbourne, Sébastien Ourselin, Jan Deprest, Georg Langs, Tom Kamiel Magda Vercauteren
08 Jan 2020

Identifying and Compensating for Feature Deviation in Imbalanced Deep Learning
Han-Jia Ye, Hong-You Chen, De-Chuan Zhan, Wei-Lun Chao
06 Jan 2020

Controllable and Progressive Image Extrapolation
Yijun Li, Lu Jiang, Ming-Hsuan Yang
25 Dec 2019

Carpe Diem, Seize the Samples Uncertain "At the Moment" for Adaptive Batch Selection
Hwanjun Song, Minseok Kim, Sundong Kim, Jae-Gil Lee
19 Nov 2019

Dice Loss for Data-imbalanced NLP Tasks
Xiaoya Li, Xiaofei Sun, Yuxian Meng, Junjun Liang, Fei Wu, Jiwei Li
07 Nov 2019

Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications [OOD, OODD]
Nicholas Carlini, Ulfar Erlingsson, Nicolas Papernot
29 Oct 2019

Learning Data Manipulation for Augmentation and Weighting
Zhiting Hu, Bowen Tan, Ruslan Salakhutdinov, Tom Michael Mitchell, Eric Xing
28 Oct 2019

Imbalance Problems in Object Detection: A Review [ObjD]
Kemal Oksuz, Baris Can Cam, Sinan Kalkan, Emre Akbas
31 Aug 2019

Submodular Batch Selection for Training Deep Neural Networks
K. J. Joseph, R. VamshiTeja, Krishnakant Singh, V. Balasubramanian
20 Jun 2019

Training Data Subset Search with Ensemble Active Learning
Kashyap Chitta, J. Álvarez, Elmar Haussmann, C. Farabet
29 May 2019

Multi-Similarity Loss with General Pair Weighting for Deep Metric Learning
Xun Wang, Xintong Han, Weilin Huang, Dengke Dong, Matthew R. Scott
14 Apr 2019

On The Power of Curriculum Learning in Training Deep Networks [ODL]
Guy Hacohen, D. Weinshall
07 Apr 2019

IMAE for Noise-Robust Learning: Mean Absolute Error Does Not Treat Examples Equally and Gradient Magnitude's Variance Matters [NoLa]
Xinshao Wang, Yang Hua, Elyor Kodirov, David Clifton, N. Robertson
28 Mar 2019

Pedestrian Attribute Recognition: A Survey [CVBM]
Tianlin Li, Shaofei Zheng, Rui Yang, Aihua Zheng, Zhe Chen, Jin Tang, Bin Luo
22 Jan 2019

NIPS - Not Even Wrong? A Systematic Review of Empirically Complete Demonstrations of Algorithmic Effectiveness in the Machine Learning and Artificial Intelligence Literature
Franz J. Király, Bilal A. Mateen, R. Sonabend
18 Dec 2018

An Empirical Study of Example Forgetting during Deep Neural Network Learning
Mariya Toneva, Alessandro Sordoni, Rémi Tachet des Combes, Adam Trischler, Yoshua Bengio, Geoffrey J. Gordon
12 Dec 2018

Theory of Curriculum Learning, with Convex Loss Functions
D. Weinshall, D. Amir
09 Dec 2018

Unsupervised Hard Example Mining from Videos for Improved Object Detection [ObjD]
SouYoung Jin, Aruni RoyChowdhury, Huaizu Jiang, Ashish Singh, Aditya Prasad, Deep Chakraborty, Erik Learned-Miller
13 Aug 2018

Learning to Reweight Examples for Robust Deep Learning [OOD, NoLa]
Mengye Ren, Wenyuan Zeng, Binh Yang, R. Urtasun
24 Mar 2018

Large-scale Cloze Test Dataset Created by Teachers [ELM]
Qizhe Xie, Guokun Lai, Zihang Dai, Eduard H. Hovy
09 Nov 2017

Convolutional Neural Networks for Sentence Classification [AILaw, VLM]
Yoon Kim
25 Aug 2014

A Proximal Stochastic Gradient Method with Progressive Variance Reduction [ODL]
Lin Xiao, Tong Zhang
19 Mar 2014