SimpleClick: Interactive Image Segmentation with Simple Vision Transformers
Click-based interactive image segmentation aims to extract objects of interest with a minimal number of user clicks. A hierarchical backbone is the de facto architecture for current methods. Recently, the plain, non-hierarchical Vision Transformer (ViT) has emerged as a competitive backbone for dense prediction tasks. This design allows the original ViT to serve as a foundation model that can be fine-tuned for downstream tasks without redesigning a hierarchical backbone for pretraining. Although this design is simple and has proven effective, it has not yet been explored for interactive segmentation. To fill this gap, we propose the first plain-backbone method for interactive segmentation, termed SimpleClick for its architectural simplicity. With the plain backbone pretrained as a masked autoencoder (MAE), SimpleClick achieves state-of-the-art performance. Remarkably, our method achieves 4.15 NoC@90 on SBD, a 21.8% improvement over the previous best result. Extensive evaluation on medical images demonstrates the generalizability of our method. We also provide a detailed analysis of our method's computational cost, highlighting its suitability as a practical annotation tool.
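The abstract reports results in NoC@90, the standard interactive-segmentation metric: the average number of clicks needed before the predicted mask first reaches 90% IoU with the ground truth. A minimal sketch of how this metric is typically computed (function names and the max-click convention here are illustrative assumptions, not taken from the paper's code):

```python
# Sketch of the NoC@k metric (Number of Clicks to reach k% IoU).
# Names and the max_clicks fallback are illustrative assumptions.

def noc(ious_per_click, target_iou=0.90, max_clicks=20):
    """Number of clicks until IoU first reaches target_iou.

    ious_per_click[i] is the IoU achieved after click i+1. If the target
    is never reached, the sample is charged max_clicks (a common
    convention in interactive-segmentation benchmarks).
    """
    for click, iou in enumerate(ious_per_click, start=1):
        if iou >= target_iou:
            return click
    return max_clicks

def mean_noc(all_ious, target_iou=0.90, max_clicks=20):
    """Dataset-level NoC@k: average click count over all samples."""
    counts = [noc(ious, target_iou, max_clicks) for ious in all_ious]
    return sum(counts) / len(counts)

# Example: one sample reaches 90% IoU on click 3, another on click 5.
print(mean_noc([[0.5, 0.8, 0.92], [0.4, 0.6, 0.7, 0.85, 0.91]]))  # → 4.0
```

Under this convention, the reported 4.15 NoC@90 on SBD means that, on average, just over four clicks suffice to reach 90% IoU per object.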