From Label Error Detection to Correction: A Modular Framework and Benchmark for Object Detection Datasets
Object detection has advanced rapidly in recent years, driven by increasingly large and diverse datasets. However, label errors often compromise the quality of these datasets and distort both training and benchmark evaluations. Although label error detection methods for object detection datasets now exist, they are typically validated only on synthetic benchmarks or via limited manual inspection, and how to correct such errors systematically and at scale remains an open problem. We introduce Rechecked, a semi-automated framework for label error correction: building on existing label error detection methods, Rechecked reviews their error proposals with lightweight, crowdsourced microtasks. We apply Rechecked to the pedestrian class of the KITTI dataset, for which we crowdsourced high-quality corrected annotations, and find that 18% of the labels in the original ground truth are missing or inaccurate. We show that current label error detection methods, combined with our correction framework, can recover hundreds of errors with little human effort compared to annotating from scratch. However, even the best methods still miss up to 66% of the label errors, which motivates further research, now enabled by our released benchmark.
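The abstract describes a modular detect-then-review pipeline: a detection method proposes candidate label errors, and crowd workers confirm or reject each proposal in a lightweight microtask. As a rough illustration only, the sketch below shows one way such a loop could be wired up; all names (ErrorProposal, review_proposals, the stub detector and crowd) are hypothetical assumptions, not the paper's released code.

```python
from dataclasses import dataclass
from collections import Counter
from typing import Callable

# A candidate label error proposed by a detection method: an image,
# a bounding box, and the kind of error suspected.
@dataclass(frozen=True)
class ErrorProposal:
    image_id: str
    box: tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels
    kind: str  # "missing" or "inaccurate"

def review_proposals(
    proposals: list[ErrorProposal],
    ask_crowd: Callable[[ErrorProposal], list[str]],
    min_agreement: float = 0.7,
) -> list[ErrorProposal]:
    """Keep only the proposals confirmed by a sufficient share of crowd votes."""
    confirmed = []
    for proposal in proposals:
        votes = ask_crowd(proposal)  # each vote is "confirm" or "reject"
        tally = Counter(votes)
        if tally["confirm"] / max(len(votes), 1) >= min_agreement:
            confirmed.append(proposal)
    return confirmed

# --- usage with stubbed components ---
def toy_detector(dataset) -> list[ErrorProposal]:
    # Stand-in for any off-the-shelf label error detection method.
    return [ErrorProposal("000123", (10.0, 20.0, 50.0, 90.0), "missing")]

def toy_crowd(proposal: ErrorProposal) -> list[str]:
    # Stand-in for a microtask platform: a few annotators answer a
    # yes/no question about a single proposed box.
    return ["confirm", "confirm", "reject"]

if __name__ == "__main__":
    proposals = toy_detector(dataset=None)
    corrections = review_proposals(proposals, toy_crowd)
    print(f"{len(corrections)} of {len(proposals)} proposals confirmed")
```

Keeping the detector behind a plain callable is what would make such a framework modular in the sense the abstract describes: any detection method that emits box-level error proposals can be plugged in unchanged.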