
Active Learning for Crowd-Sourced Databases

Abstract

In this paper, we present algorithms for integrating machine learning into crowd-sourced databases for the purpose of acquiring labeled data. The key observation is that humans and machine learning algorithms can be complementary on many tasks: when labeling images, for example, humans generally provide more accurate labels but are slow and expensive, while algorithms are usually less accurate but faster and cheaper. Based on this observation, we present two active learning approaches that decide how to use humans and algorithms together in a crowd-sourced database, corresponding to two settings: upfront and iterative. In the upfront setting, we identify items that would be hard for algorithms to label and ask humans to label them. In the iterative setting, we repeatedly choose the best items for humans to label and retrain the model after incorporating their labels, so as to improve the quality of the classifier. In each setting, we propose several algorithms based on the theory of the non-parametric bootstrap, which makes our results applicable to a broad class of machine learning models. We also address a range of issues specific to crowds, such as the fact that crowd-generated labels can be incorrect, and that crowd workers can be asked to label several different items simultaneously in a "batch". Our results, on three data sets collected with Amazon's Mechanical Turk and on 15 data sets from the UCI KDD archive, show that our methods on average ask one to two orders of magnitude fewer questions than a random baseline, and two to eight times fewer questions than previous active learning schemes.
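To make the iterative setting concrete, the sketch below shows one way such a loop could look: per-item uncertainty is estimated with a non-parametric bootstrap (disagreement among classifiers fit on resampled labeled data), the most uncertain items are sent to a (possibly noisy) crowd oracle, and the model is retrained. This is only a minimal illustration under assumed details; the classifier choice, the uncertainty measure, and names such as `bootstrap_uncertainty` and `crowd_label` are not taken from the paper.

```python
"""Minimal sketch of an iterative, bootstrap-based active learning loop.

Assumptions (not from the paper): logistic regression as the classifier,
prediction variance across bootstrap models as the uncertainty score,
and a simulated crowd oracle that answers correctly 90% of the time.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample


def bootstrap_uncertainty(X_lab, y_lab, X_unlab, n_boot=10, seed=0):
    """Variance of bootstrap-model predictions for each unlabeled item."""
    rng = np.random.RandomState(seed)
    preds = []
    for _ in range(n_boot):
        Xb, yb = resample(X_lab, y_lab, random_state=rng)
        clf = LogisticRegression(max_iter=1000).fit(Xb, yb)
        preds.append(clf.predict_proba(X_unlab)[:, 1])
    return np.var(np.vstack(preds), axis=0)


def iterative_active_learning(X, crowd_label, init=20, batch=10, rounds=5):
    """Query the crowd for the most uncertain items, retrain, repeat."""
    rng = np.random.RandomState(0)
    labels = {i: crowd_label(i)
              for i in rng.choice(len(X), size=init, replace=False)}
    for _ in range(rounds):
        labeled = list(labels)
        unlabeled = [i for i in range(len(X)) if i not in labels]
        unc = bootstrap_uncertainty(
            X[labeled],
            np.array([labels[i] for i in labeled]),
            X[unlabeled])
        # Ask humans to label the `batch` items the models disagree on most.
        for j in np.argsort(unc)[::-1][:batch]:
            labels[unlabeled[j]] = crowd_label(unlabeled[j])  # noisy answer
    labeled = list(labels)
    return LogisticRegression(max_iter=1000).fit(
        X[labeled], np.array([labels[i] for i in labeled]))


if __name__ == "__main__":
    # Toy data and a simulated crowd oracle (correct 90% of the time).
    rng = np.random.RandomState(1)
    X = rng.randn(500, 5)
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    noisy = lambda i: y[i] if rng.rand() < 0.9 else 1 - y[i]
    model = iterative_active_learning(X, noisy)
    print("accuracy:", model.score(X, y))
```

The upfront setting would differ only in that the uncertainty scores are computed once, before any human labeling, to pre-select the items that look hardest for the model.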
