Optimality of Belief Propagation for Crowdsourced Classification: Proof
for Arbitrary Number of Per-worker Assignments
Crowdsourcing systems are popular for solving large-scale labelling tasks with low-paid (or even unpaid) workers. We study the problem of recovering the true labels from possibly erroneous crowdsourced labels under the popular Dawid-Skene model. Several algorithms have recently been proposed for this inference problem, but the best known guarantee is still significantly larger than the fundamental limit. In our previous work, we closed this gap under a canonical assumption that each worker is assigned only two tasks, i.e., r = 2, and each task is assigned to a sufficiently large but constant number of workers. In this work, we remove the condition on r and show that, for every r, Belief Propagation (BP) exactly matches a lower bound on the fundamental limit whenever each task is assigned sufficiently many workers. This optimality guarantee for BP is the strongest possible, in the sense that it is information-theoretically impossible for any other algorithm to correctly label a larger fraction of the tasks. In the general setting, regardless of the number of workers assigned to each task, we establish a dominance result: BP outperforms all existing algorithms with provable guarantees. Experimental results suggest that BP is close to optimal in all regimes considered, while every other algorithm exhibits suboptimal performance in some regime.
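To make the setting concrete, the following is a minimal sketch of the one-coin Dawid-Skene model described above, with an EM-style iterative weighted majority vote used as a simplified stand-in for the Belief Propagation algorithm analyzed in the paper. All parameter values (task/worker counts, reliability range, clamping constants) are illustrative assumptions, not taken from the paper.

```python
import math
import random

random.seed(0)

# Illustrative sizes: r = 2 tasks per worker matches the earlier canonical case.
n_tasks, n_workers, r = 300, 600, 2
true_labels = [random.choice([-1, 1]) for _ in range(n_tasks)]
# One-coin model: worker w answers correctly with hidden probability p_w.
reliability = [random.uniform(0.6, 0.9) for _ in range(n_workers)]

# Each worker labels r randomly chosen tasks, flipping with probability 1 - p_w.
answers = {}  # (worker, task) -> observed label in {-1, +1}
for w in range(n_workers):
    for t in random.sample(range(n_tasks), r):
        correct = random.random() < reliability[w]
        answers[(w, t)] = true_labels[t] if correct else -true_labels[t]

def em_estimate(answers, n_tasks, n_workers, iters=10):
    """EM-style iterative weighted majority vote (a simplified stand-in
    for BP on the worker-task bipartite graph)."""
    # Initialize task estimates with a plain majority vote.
    score = [0.0] * n_tasks
    for (w, t), a in answers.items():
        score[t] += a
    est = [1 if s >= 0 else -1 for s in score]
    for _ in range(iters):
        # M-step: estimate each worker's accuracy against current labels.
        cnt = [0] * n_workers
        hit = [0] * n_workers
        for (w, t), a in answers.items():
            cnt[w] += 1
            hit[w] += (a == est[t])
        acc = [0.7] * n_workers  # default for workers with no answers
        for w in range(n_workers):
            if cnt[w]:
                # Clamp away from 0 and 1 to keep log-weights finite.
                acc[w] = min(max(hit[w] / cnt[w], 0.05), 0.95)
        # E-step: re-label each task by a log-odds-weighted vote.
        score = [0.0] * n_tasks
        for (w, t), a in answers.items():
            score[t] += a * math.log(acc[w] / (1 - acc[w]))
        est = [1 if s >= 0 else -1 for s in score]
    return est

est = em_estimate(answers, n_tasks, n_workers)
accuracy = sum(e == t for e, t in zip(est, true_labels)) / n_tasks
print(f"fraction of tasks labelled correctly: {accuracy:.2f}")
```

The quantity printed at the end is exactly the figure of merit in the abstract: the fraction of tasks labelled correctly, which the paper's lower bound constrains for every algorithm.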