
Group DETR: Fast Training Convergence with Decoupled One-to-Many Label Assignment

IEEE International Conference on Computer Vision (ICCV), 2023
Abstract

Detection Transformer (DETR) relies on one-to-one label assignment, i.e., assigning one ground-truth (gt) object to exactly one positive object query, for end-to-end object detection, and lacks the capability of exploiting multiple positive queries. We present a novel DETR training approach, named Group DETR, to support multiple positive queries. Specifically, we decouple the positives into multiple independent groups and keep only one positive per gt object in each group. We make simple modifications during training: (i) adopt K groups of object queries; (ii) conduct decoder self-attention on each group of object queries separately, with the same parameters; (iii) perform one-to-one label assignment for each group, leading to K positive object queries for each gt object. In inference, we only use one group of object queries, making no modifications to either the architecture or the inference process. We validate the effectiveness of the proposed approach on DETR variants, including Conditional DETR, DAB-DETR, DN-DETR, and DINO.
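
The two training-time ingredients, group-wise decoder self-attention and per-group one-to-one matching, can be sketched compactly. Below is a minimal, hypothetical PyTorch-style illustration, not the authors' code: a block-diagonal attention mask restricts self-attention to within each query group, and Hungarian matching (here via SciPy's `linear_sum_assignment` as a stand-in) runs independently per group. Names such as `group_attention_mask` and `num_groups` are assumptions for illustration.

```python
import torch
from scipy.optimize import linear_sum_assignment


def group_attention_mask(queries_per_group: int, num_groups: int) -> torch.Tensor:
    """Boolean mask blocking self-attention across query groups.

    True entries are *disallowed*, following the convention of the
    attn_mask argument of torch.nn.MultiheadAttention.
    """
    total = queries_per_group * num_groups
    mask = torch.ones(total, total, dtype=torch.bool)
    for g in range(num_groups):
        s = g * queries_per_group
        mask[s:s + queries_per_group, s:s + queries_per_group] = False
    return mask


def per_group_one_to_one_assignment(cost: torch.Tensor, num_groups: int):
    """Run one-to-one (Hungarian) matching independently in each group.

    cost: [total_queries, num_gt] matching cost. Each gt object receives
    exactly one positive query per group, i.e. K positives overall.
    """
    queries_per_group = cost.shape[0] // num_groups
    matches = []
    for g in range(num_groups):
        s = g * queries_per_group
        block = cost[s:s + queries_per_group]
        q_idx, gt_idx = linear_sum_assignment(block.detach().cpu().numpy())
        matches.append((torch.as_tensor(q_idx) + s, torch.as_tensor(gt_idx)))
    return matches


# Toy example: K = 3 groups of 100 queries, 5 gt objects.
K, Q, G = 3, 100, 5
mask = group_attention_mask(Q, K)   # pass as attn_mask to decoder self-attention
cost = torch.rand(K * Q, G)         # e.g. classification + box matching cost
matches = per_group_one_to_one_assignment(cost, K)
# At inference, only the first group (queries 0..Q-1) is kept,
# so the deployed model is unchanged.
```

Because the extra groups exist only at training time and share all decoder parameters, the mask is the only architectural change needed; dropping groups 2..K at inference recovers the standard single-group DETR decoder.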
